NIPS
Title: Self-Supervised Relationship Probing

Abstract
Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current human-annotated visual relationship datasets suffer from the long-tailed predicate distribution problem, which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intra- and inter-modality encodings to model relationships within each modality separately and jointly, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of the proposed method on various vision-language tasks that benefit from improved visual relationship understanding.

1 Introduction
Visual relationships that describe object relationships in images have become increasingly important for high-level computer vision (CV) tasks that need complex reasoning [1, 2, 3, 4]. They are often organized in a structured graph representation called a scene graph, where nodes represent objects and edges represent relationships between objects. In recent years, we have witnessed great progress with visual relationship datasets such as Visual Genome [5] and the application of scene graphs to various CV reasoning tasks such as image captioning [6, 7], image retrieval [8], and visual reasoning [9].

Despite this, current visual relationship models still rely on human-annotated relationship labels. Due to the combinatorics involved (two objects and one relationship between them, where objects and relationships each have many types), relationships are numerous and long-tail distributed; thus, it is difficult to collect enough annotations to sufficiently represent important but less frequently observed relationships. Consequently, current visual relationship models tend to focus on the few relationships that have large numbers of human annotations [10], and they ignore relationship categories with few annotations. Some research attempts use external knowledge bases to help enrich visual relationships; however, the total number of relationships modeled is still relatively small [11].

On the other hand, in the past few years, we have seen significant progress in natural language processing (NLP) towards building contextualized language models with self-supervised pretraining objectives [12, 13]. The removal of human annotators from the training loop has enabled training on massive unlabeled datasets, leading to significant advances in NLP performance [14, 15]. These trends have also brought significant advances in vision-language (VL) pretraining tasks [16, 17, 18, 19, 20]. Most existing VL pretraining methods concatenate visual objects and the corresponding sentences as one input and adopt the Transformer [21] as the core module to learn contextualized multi-modal representations in a self-supervised manner via self- and cross-attentions. These models rely heavily
on the multi-head attention layers to explore implicit relations, or they directly rely on attention distributions to explain the relations between objects [17, 22]. However, different layers vary in their behaviors [23, 24], and it has been shown that attention alone can be deceiving when used for interpretability and explanation [25]. Thus, existing VL pretraining algorithms suffer from two problems: discovered relationships are not modeled explicitly, but are instead expected to be implicitly represented as transformer weights; and the concatenation of multimodal inputs at training time restricts the model to require multimodal inputs at prediction time as well.

Motivated by textual relation mining work in NLP [26], we propose a novel framework that discovers dependencies between objects from the model's representation space, which addresses the problems highlighted above. Our approach is based on two simple observations: (1) when we slightly change an image, the relative visual relationships in that image remain unchanged; (2) relationships mentioned in image descriptions are visually observable in the corresponding image. Our approach relies on three modules, each consisting of a set of layers. In the first module, implicit intra-modal relationships are modeled using transformer encoders. In the second module, cross-modal learning allows implicit relationship information to be leveraged across modalities. In the third module, relationships between visual and linguistic entities are represented explicitly as latent variables via a technique we call a relationship probe. All modules are trained using self-supervision, with a first stage relying on masked language modeling to train the first two modules, and a second stage relying on contrastive learning and linguistic dependency trees as supervisory signals to train the relationship probe network.

Our main contribution is a novel self-supervised relationship probing (SSRP) framework for finding dependencies among visual objects or textual entities that addresses issues with existing visual relationship models: it relies on self-supervision rather than explicit supervision, it explicitly models relationships as latent variables, and it leverages cross-modal learning but allows a single modality as input at prediction time. We conduct extensive experiments to demonstrate that our method can benefit both vision and VL understanding tasks.

2 Background
Visual relationships. It has been demonstrated that visual relationships between objects can help improve performance on many CV tasks [8, 27, 28, 29, 30, 31]. Most of these methods assume a known explicit graph structure, and limit the graph to the most frequently occurring predicate categories while ignoring others that do not have enough labeled examples. Relaxing this assumption, some works transfer the object representations learned with predicate functions to rare predicates in few-shot scene graph generation [32, 33, 34]. Other works capture the relations via attention mechanisms [35, 36, 37, 38]. However, unlike object detectors that are trained on unambiguous and objectively defined object class labels, visual relationships are subjective, and it is hard to exhaustively annotate all possible relationships between objects. Thus, we do not explicitly define or label visual relationship classes; instead, we discover the implicit visual relationships via the accompanying captions. We call our method SSRP in the sense that we do not use any explicit predicate labels.
Pretraining. Motivated by the huge success of BERT [13] in NLP, there is growing interest in pretraining generic models to solve a variety of VL problems [39, 40, 22, 18]. These methods generally employ BERT-like objectives to learn cross-modal representations from visual region features and word embeddings. They use self- and cross-attention mechanisms to learn joint representations that are appropriately contextualized in both modalities. However, most VL pretraining works rely heavily on massive visual-linguistic corpora [19, 17]. Moreover, although huge multi-modal training datasets enable pretraining methods to learn good representations for downstream multi-modal VL tasks, they usually do not benefit visual tasks that deal with only the visual modality during inference. We overcome this problem with a new approach that enables the generation of implicit visual object relationships even with only visual inputs during inference, while benefiting greatly from the cross-modality learning objectives during training.

We would like to point out that several works focus on investigating the representations learned by transformer-based pretraining models [41, 42]. Their findings suggest that BERT-based pretraining learns a rich set of intermediate representations of both semantic and syntactic information, which can be used to unearth the representations of dependency grammar relations. An interesting finding in [26] shows that BERT can recover dependency parse trees that have not been encountered during training. Coenen et al. [43] further present empirical descriptions of syntactic representations in BERT. These results in NLP motivate us to exploit BERT to find visual relationships between image regions without explicitly training on relationship annotations.

3 Method
Fig. 1 gives an overview of the three variants of our method: SSRP-Share, SSRP-Visual, and SSRP-Cross. Each variant consists of three modules: an intra-modality encoder, an inter-modality encoder, and a relationship probe. The main difference among the three SSRP variants lies in the inter-modality encoding process. The intra-modality and inter-modality encoders are BERT-like encoders that respectively capture implicit single-modality relations and cross-modality relations among the entities (image objects and textual tokens) and output contextual representations. The relationship probe generates relationship graphs for each modality from the encoded contextual representations in a self-supervised way. In the following, we first briefly describe BERT [13], since our approach is based on the BERT architecture, and then we describe the individual modules of our SSRP frameworks as well as the learning process.

3.1 Revisiting BERT
BERT uses Masked Language Modeling (MLM), a self-supervised pretraining objective that allows a transformer encoder [21] to encode a sequence from both directions simultaneously. Specifically, for an input sequence $S = \{w_1, \ldots, w_{N_w}\}$ of $N_w$ tokens, BERT first randomly masks out 15% of the tokens and then predicts the masked tokens in the output. The masked tokens in the input sequence are represented by a special symbol [MASK] and fed into a multi-layer transformer encoder. Let $H^l = \{h_1, \ldots, h_{N_w}\}$ be the encoded features at the $l$-th transformer layer, with $H^0$ being the input layer.
The features at the $(l+1)$-th layer are obtained by applying a transformer block defined as:

$$H^{l+1} = \mathrm{LN}\Big(\mathrm{LN}\big(H^l + f^l_{\text{Self-Att}}(H^l)\big) + f^l_{\text{FF}}\big(\mathrm{LN}(H^l + f^l_{\text{Self-Att}}(H^l))\big)\Big) \qquad (1)$$

where LN stands for layer normalization [44], $f^l_{\text{Self-Att}}(\cdot)$ is a multi-headed self-attention sub-layer, and $f^l_{\text{FF}}(\cdot)$ is a feed-forward (FF) sub-layer composed of two fully-connected (FC) layers; both are wrapped in residual connections [45] with an LN as specified in Eq. 1. The token representations in the final layer are used to predict the masked tokens independently.

3.2 Model architecture
Input embeddings. The input to the three SSRP pretraining models includes both visual and textual elements, where the former are defined as regions-of-interest (RoIs) in an image and the latter are defined as the tokens in a caption. Specifically, given an image $I$, we use Faster R-CNN [46] to detect RoIs $\{v_1, \ldots, v_{N_v}\}$ and take the feature vector prior to the output layer of each RoI as the visual feature embedding. For a caption $S$, we insert the special tokens [CLS] and [SEP] before and after the sentence, and use the WordPiece tokenizer [47] to split it into tokens $\{w_1, \ldots, w_{N_w}\}$. Apart from token and visual feature embeddings, we also add positional encodings to represent tokens. In particular, for token $w_i$, its input representation $\tilde{w}_i$ is the sum of its trainable token embedding, positional embedding (index in the sequence), and segment (image/text) embedding, followed by an LN layer. Each object $v_i$ is represented by its positional feature (normalized top-left and bottom-right coordinates) and its 2048-dimensional RoI feature, both of which are transformed through FC+LN layers to obtain the position-aware object-level embedding $\tilde{v}_i$.

Intra-modality encoding. The purpose of intra-modality encoding is to model the intra-relations of the encoded representations within one modality via self-attention, as in BERT. Specifically, we randomly mask out $\tilde{v}_i$ and $\tilde{w}_j$ with a fixed probability, and feed the masked object-level embeddings $\tilde{V} = \{\tilde{v}_1, \ldots, \tilde{v}_{\setminus i}, \ldots, \tilde{v}_{N_v}\}$ and word-level embeddings $\tilde{W} = \{\tilde{w}_1, \ldots, \tilde{w}_{\setminus j}, \ldots, \tilde{w}_{N_w}\}$ into two separate intra-modality encoders ($f^{V\leftrightarrow V}_{\text{Intra}}$ and $f^{S\leftrightarrow S}_{\text{Intra}}$). Each layer in the intra-modality encoders contains a self-attention sub-layer and an FF sub-layer (Eq. 1).

Inter-modality encoding. The inter-modality encoder models the cross-modality relationships between image and textual entities. The three proposed SSRP pretraining models use different inter-modality encoding schemes, as illustrated in Fig. 1. In SSRP-Share, the inter-modality encoding is done with a single encoder $f^{VS}_{\text{Inter}}$ that is shared between the two modalities and consists of a shared self-attention sub-layer wrapped in a residual connection with an LN. The shared weights connect the two modalities by causing the projections of the two input modalities to align in the query, key, and value spaces. In SSRP-Visual, the textual features attend to visual features to connect the two modalities. In contrast to SSRP-Share, we keep $f^{VS}_{\text{Inter}}$ for the visual branch, which contains a self-attention sub-layer and an FF sub-layer, while using $f^{S\rightarrow V}_{\text{Inter}}$ for the textual branch, which consists of a self-attention sub-layer, one unidirectional cross-attention sub-layer, and an FF sub-layer. Finally, SSRP-Cross uses an inter-modality bidirectional cross-attention encoder $f^{V\leftrightarrow S}_{\text{Inter}}$, where both textual and visual features attend to each other.
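For concreteness, the following is a minimal PyTorch sketch of the post-LN transformer block of Eq. 1, which is the building block shared by the encoders above. The class name, the feed-forward width (3072), and the dropout rate are our own choices following BERT-base conventions; the paper specifies only the hidden size (768) and the number of heads (12).

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Post-LN transformer block matching Eq. 1:
    A = LN(H + SelfAtt(H));  H' = LN(A + FF(A))."""
    def __init__(self, hidden=768, heads=12, ff_dim=3072, drop=0.1):
        super().__init__()
        self.att = nn.MultiheadAttention(hidden, heads, dropout=drop,
                                         batch_first=True)
        self.ln1 = nn.LayerNorm(hidden)
        self.ff = nn.Sequential(nn.Linear(hidden, ff_dim), nn.GELU(),
                                nn.Linear(ff_dim, hidden))
        self.ln2 = nn.LayerNorm(hidden)

    def forward(self, h, key_padding_mask=None):
        # Self-attention sub-layer with residual connection and LN.
        a, _ = self.att(h, h, h, key_padding_mask=key_padding_mask)
        a = self.ln1(h + a)              # LN(H + SelfAtt(H))
        # Feed-forward sub-layer with residual connection and LN.
        return self.ln2(a + self.ff(a))  # LN(A + FF(A))
```

The cross-attention sub-layers of the inter-modality encoders would follow the same pattern, with queries from one modality and keys/values from the other.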
Following [17], each layer in $f^{V\leftrightarrow S}_{\text{Inter}}$ consists of two self-attention sub-layers, one bidirectional cross-attention sub-layer, and two FF sub-layers.

Relationship probing. The purpose of relationship probing is to model the implicit relations among visual or textual entities. Specifically, we build a latent relationship graph $\mathcal{G}_v$ for the objects in an image and a latent relationship graph $\mathcal{G}_w$ for the tokens in a caption, based on the unmasked contextual object representations $V = \{v_1, \ldots, v_{N_v}\}$ and token representations $W = \{w_1, \ldots, w_{N_w}\}$, which are the output feature vectors of the inter-modality encoders. Inspired by [26], we use a visual probe and a textual probe to compute the distances for each object pair $(v_i, v_j) \in \mathcal{G}_v$ and each token pair $(w_i, w_j) \in \mathcal{G}_w$, respectively. The distance for an object/token pair is defined as:

$$d_{B_u}(u_i, u_j)^2 = \big(B_u(u_i - u_j)\big)^{\top}\big(B_u(u_i - u_j)\big) \qquad (2)$$

where $u \in \{v, w\}$, $i$ and $j$ are the object/token indices, and $B_u$ are the parameters of the probe layer. The learning goal of a structural probe (Sec. 3.3) is to determine the edge distances between all pairs of nodes. The outputs of the visual probe and the textual probe layer are respectively the distance matrices $R_v = (d_{B_v}(v_i, v_j)^2) \in \mathbb{R}^{N_v \times N_v}$ and $R_w = (d_{B_w}(w_i, w_j)^2) \in \mathbb{R}^{N_w \times N_w}$, which capture implicit relations between visual/textual entities.
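The probe of Eq. 2 is simply a learned linear map under which squared Euclidean distances are read off for every entity pair. Below is a minimal sketch; the probe rank (projection width of 128) is our assumption, as the paper does not state the dimensionality of $B_u$:

```python
import torch
import torch.nn as nn

class RelationshipProbe(nn.Module):
    """Structural probe (Eq. 2): squared distances under a learned
    linear map B, computed for all entity pairs at once."""
    def __init__(self, hidden=768, rank=128):
        super().__init__()
        self.B = nn.Linear(hidden, rank, bias=False)

    def forward(self, U):
        # U: (batch, N, hidden) contextual representations V or W.
        proj = self.B(U)                              # (batch, N, rank)
        diff = proj.unsqueeze(2) - proj.unsqueeze(1)  # (batch, N, N, rank)
        return (diff ** 2).sum(-1)                    # R: (batch, N, N)

# Usage sketch: Rv = visual_probe(V); Rw = textual_probe(W)
```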
3.3 Learning
We employ two learning stages in our method. In the first stage, we train the BERT encoders, including the intra-modality encoders and the inter-modality encoders, to obtain the contextual object representations $V$ and the token representations $W$. In the second stage, with these contextual representations, we freeze the BERT encoders and train the two probe layers to generate the implicit relationship matrices $R_v$ and $R_w$. Fig. 2 shows a schematic diagram of our learning framework.

3.3.1 Stage 1: Training BERT encoders
Masked language modeling with RoI feature reconstruction. We train the BERT encoders with the MLM objective to predict a masked RoI feature $v_i$ and a masked token $w_j$ given their surroundings $I_{\setminus i}$ and $S_{\setminus j}$. We also include a smooth L1 reconstruction loss [48] for the grounding of visual features. We minimize the following loss:

$$\mathcal{L}_{\text{MLM}} = -\mathbb{E}_{I,S \sim D}\Big[\log p(v_i \mid I_{\setminus i}, \tilde{S}) + \log p(w_j \mid S_{\setminus j}, \tilde{I}) - \sum_i \mathrm{L1}\big(v_i - g(v_i \mid I_{\setminus i}, \tilde{S})\big)\Big] \qquad (3)$$

where $\tilde{I}$ and $\tilde{S}$ are the image regions and input words with random masking, $g(\cdot)$ outputs the unmasked visual feature, $p(v_i \mid I_{\setminus i}, \tilde{S})$ and $p(w_j \mid S_{\setminus j}, \tilde{I})$ are respectively the predicted probabilities for the target object label and word given the masked inputs, and $I$ and $S$ are sampled from the training set $D$. Note that here we reuse the symbols $v$ and $w$ to represent both the visual features and the label/word for simplicity.

Image-text matching. An additional loss is added to perform instance-level alignment between an image and its caption. Both positive ($y = 1$) and negative ($y = 0$) image-sentence pairs are sampled, and the model learns to align them with a binary cross-entropy loss:

$$\mathcal{L}_{\text{Match}} = -\mathbb{E}_{I,S \sim D}\big[y \log p(f_{\text{align}}) + (1 - y) \log(1 - p(f_{\text{align}}))\big] \qquad (4)$$

where $p(f_{\text{align}})$ is the output probability of a binary classifier and $f_{\text{align}}$ is the visual-textual alignment representation. For SSRP-Share and SSRP-Visual, $f_{\text{align}}$ is computed as $g_{\text{align}}([\bar{v}; w_{\text{CLS}}])$, where $\bar{v} = \sum_i v_i / N_v$ is the visual representation averaged over the contextual features of all the visual elements $V$, $w_{\text{CLS}}$ is the contextual representation of the special token [CLS], and $g_{\text{align}}(\cdot)$ is a non-linear mapping function (see supplementary for details). For SSRP-Cross, we define $f_{\text{align}} = g_{\text{align}}(w_{\text{CLS}})$. Essentially, we force $w_{\text{CLS}}$ to model either the aggregated textual or visual-textual information. The overall training loss for the first-stage pretraining becomes: $\mathcal{L}_{\text{Stage1}} = \mathcal{L}_{\text{MLM}} + \mathcal{L}_{\text{Match}}$.

3.3.2 Stage 2: Training relationship probes
In the second stage, the relationship probe layers are learned via a probe loss $\mathcal{L}_{\text{SProbe}}$ and a contrastive loss $\mathcal{L}_{\text{CL-all}}$, where the former ensures the learned textual relationships $R_w$ are structurally consistent with a dependency tree and the latter ensures that the learned relationships $R_v$ and $R_w$ remain stable across different data augmentations. In particular, on the language side, we use a pre-parsed dependency tree $\mathcal{G}_w$ for each sentence [49] to guide the textual relationship probe learning, with $\mathcal{L}_{\text{SProbe}}$ defined as:

$$\mathcal{L}_{\text{SProbe}} = \frac{1}{N_w^2} \sum_{i,j} \big|d_{\mathcal{G}_w}(w_i, w_j) - d_{B_w}(w_i, w_j)^2\big| \qquad (5)$$

where $d_{\mathcal{G}_w}(w_i, w_j)$ is the distance between tokens $w_i$ and $w_j$ in the dependency tree $\mathcal{G}_w$.

For the contrastive loss, we adopt stochastic data augmentation methods to transform an original image (or sentence) into semantics-preserving data samples and treat them as positive pairs; see Fig. 2, where $I_i \sim \mathcal{T}_I$ and $S_i \sim \mathcal{T}_S$ denote image and sentence augmentations, respectively. (Note that, in the interest of coherence, we describe data augmentation together with contrastive learning in Stage 2; the augmented data can also be used to train the BERT encoders in Stage 1.) For the data augmentation details, please refer to Sec. 4.1. Specifically, we sample a minibatch of $N_c$ image-caption pairs and apply two separate augmentation strategies to each modality, resulting in $2N_c$ image-caption pairs. For every positive pair, its negative pairs are not sampled explicitly; instead, we take the other $2(N_c - 1)$ augmented image-caption pairs within the minibatch as negatives. We adapt the contrastive loss introduced in [50, 51] to our cross-modal scenario. The single-modality contrastive loss $\mathcal{L}_{\text{SCL}}(i, j)$ and the cross-modality contrastive loss $\mathcal{L}_{\text{XCL}}(i, j)$ for a positive image-caption pair $\langle\{I_i, I_j\}, \{S_i, S_j\}\rangle$ are defined as:

$$\mathcal{L}_{\text{SCL}}(i, j) = -\log \frac{e^{Z^{v,v}_{i,j}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \neq i]} e^{Z^{v,v}_{i,k}}} - \log \frac{e^{Z^{w,w}_{i,j}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \neq i]} e^{Z^{w,w}_{i,k}}} \qquad (6)$$

$$\mathcal{L}_{\text{XCL}}(i, j) = -\sum_{m \in \{i,j\}} \sum_{n \in \{i,j\}} \left(\log \frac{e^{Z^{v,w}_{m,n}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \neq m]} e^{Z^{v,w}_{m,k}}} + \log \frac{e^{Z^{w,v}_{m,n}}}{\sum_{k=1}^{2N_c} \mathbb{1}_{[k \neq m]} e^{Z^{w,v}_{m,k}}}\right) \qquad (7)$$

where $\mathbb{1}_{[k \neq i]} \in \{0, 1\}$ is an indicator function, $Z^{x,y}_{i,j} = \big((z^x_i)^{\top} z^y_j\big) / \big(\|z^x_i\| \|z^y_j\| \tau\big)$ denotes the temperature-scaled cosine similarity between $z^x_i$ and $z^y_j$, $z^v$ and $z^w$ are the nonlinear projections of the vectorized relationship matrices $R_v$ and $R_w$ obtained with an MLP projection head [50], and $\tau$ is a temperature hyperparameter [52]. The final loss is computed across all positive image-caption pairs in a minibatch: $\mathcal{L}_{\text{CL-all}} = \frac{1}{2N_c} \sum_{i,j} [\mathcal{L}_{\text{SCL}}(i, j) + \mathcal{L}_{\text{SCL}}(j, i) + \mathcal{L}_{\text{XCL}}(i, j)]$. Note that $\mathcal{L}_{\text{XCL}}$ is invariant to the order of the sample indices $(i, j)$ and thus is included just once in $\mathcal{L}_{\text{CL-all}}$. In this stage, the overall training objective is: $\mathcal{L}_{\text{Stage2}} = \mathcal{L}_{\text{SProbe}} + \mathcal{L}_{\text{CL-all}}$.
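As a rough illustration of the single-modality term of Eq. 6, here is a SimCLR-style sketch under the common convention that samples 2t and 2t+1 in the batch are the two augmentations of pair t. The helper name, batch layout, and temperature value are our assumptions; the cross-modal term of Eq. 7 is obtained analogously by comparing the projections z^v against z^w:

```python
import torch
import torch.nn.functional as F

def nt_xent(z, pos_index, tau=0.1):
    """For each anchor i, -log softmax over temperature-scaled cosine
    similarities to all other samples, with z[pos_index[i]] as positive
    (one term of Eq. 6). z: (2*Nc, d) projected relationship vectors."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t() / tau                   # (2Nc, 2Nc) cosine / tau
    sim.fill_diagonal_(float('-inf'))       # exclude k == i from denominator
    return F.cross_entropy(sim, pos_index)  # mean of -log p(positive)

# Usage sketch for the single-modality part of L_CL-all, where zv and zw
# are the MLP projections of the vectorized Rv and Rw:
# idx = torch.arange(2 * Nc)
# pos = idx ^ 1                 # partner of 2t is 2t+1 and vice versa
# loss = nt_xent(zv, pos) + nt_xent(zw, pos)
```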
4 Experiments
4.1 Datasets and implementation details
Pretraining corpus. To enlarge the training data, recent VL pretraining works [17, 16, 53, 18] use combined pretraining corpora such as Conceptual Captions (CC) [54], SBU captions [55], MSCOCO [56, 57, 58], Flickr30K [59], VQA [1], GQA [2], VG [5], BooksCorpus (BC) [60], and English Wikipedia (EW). In contrast, we aggregate pretraining data only from the train (113k) and validation (5k) splits of MSCOCO [58]. Specifically, with each MSCOCO image associated with five independent caption annotations, MSCOCO provides us with an aligned VL dataset of 591K image-and-sentence pairs over 118K distinct images. Table 1 summarizes the corpora used by different pretraining methods.

Data augmentation. Instead of combining the existing VL datasets, we expand the pretraining corpus with data augmentation on both images and sentences, as shown in Table 2. For data augmentation on images, we employ horizontal flipping (HFlip) at the image level and several augmentations at the RoI feature level, including HFlip, rotations (90°, 180°, and 270°), and bounding-box jittering (with scale factors selected from the range [0.8, 1.2]). We enrich the training sentences through two pretrained back-translators [61]: English→German→English (En-De-En) and English→Russian→English (En-Ru-En). Our augmentation strategies generate significantly more training samples, 1.65M at the RoI level and 1.77M at the sentence level, while largely preserving the semantic information.
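For illustration, a minimal sketch of the bounding-box jittering step described above; the (x1, y1, x2, y2) box format and a single scale factor shared across both axes are our assumptions, and clipping to image bounds is omitted for brevity:

```python
import random

def jitter_box(box, low=0.8, high=1.2):
    """Rescale a box about its centre by a factor drawn from [low, high]."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    s = random.uniform(low, high)
    hw, hh = (x2 - x1) * s / 2, (y2 - y1) * s / 2
    return (cx - hw, cy - hh, cx + hw, cy + hh)
```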
Pretraining setting. We pretrain the three SSRP variants shown in Fig. 1. We set the numbers of layers of the intra-modality encoders $f^{S\leftrightarrow S}_{\text{Intra}}$ and $f^{V\leftrightarrow V}_{\text{Intra}}$ to 9 and 5, respectively, and the number of layers of the inter-modality encoders $f^{VS}_{\text{Inter}}$, $f^{S\rightarrow V}_{\text{Inter}}$, and $f^{V\leftrightarrow S}_{\text{Inter}}$ to 5. For each transformer block, we set its hidden size to 768 and the number of heads to 12. To keep the relationship matrices the same size, the maximum numbers of words and objects are both set to 36. Pretraining is divided into two stages. In stage 1, we train with $\mathcal{L}_{\text{Stage1}}$. At each iteration, we randomly mask input words and RoIs with a probability of 0.15. All models are initialized with pretrained BERT weights, and the respective pretraining corpora are listed in Table 2. For cross-modality matching, we replace each sentence with a mismatched one with a probability of 0.5. We use the Adam optimizer [62] with a linear learning-rate schedule [13] and a peak learning rate of 1e-4. Training is carried out on four Tesla V100 GPUs with a batch size of 128 for 10 epochs. After stage 1, we freeze the parameters of the intra-modality and inter-modality encoders and further train the relationship probes with $\mathcal{L}_{\text{Stage2}}$. The syntactic dependency tree for each sentence is built by [49]. All variants of SSRP are trained for 30 epochs with Adam, a batch size of 512, and a learning rate of 5e-5.

Fine-tuning tasks. We fine-tune the pretrained models on multiple downstream tasks: three VL understanding tasks (NLVR2 [63], VQA [1], and GQA [2]) and a generation task (image captioning), following the standard fine-tuning settings for downstream tasks in [17, 53]. For the VL understanding tasks, we use the linearly fused probed relationships and the visual-textual alignment prediction $f_{\text{align}}$ of Eq. 4 as features. For image captioning, we utilize the Up-Down [64] framework and incorporate the refined object features learned by SSRP-Visual. The captioning model is first trained with a cross-entropy loss and then with a reinforcement learning loss [65].

4.2 Experimental results & analysis
We first perform ablation experiments over a few design choices of our method on NLVR2. We then show comparison results on VQA, GQA, and image captioning.

Effect of data augmentation. Table 3 shows the ablation study results. For the 'Raw' setting, we pretrain our models only on the original corpus, while in the 'Aug.' setting, we augment the original corpus with the augmentation techniques listed in Table 2. It is evident that our data augmentation strategy indeed improves the performance of all three models. Note that we employ data augmentation only during pretraining, not during fine-tuning.

Effect of attention. Comparing the three variants that use different attention settings in Table 3, we observe that SSRP-Cross performs the best and SSRP-Visual is better than SSRP-Share. This confirms the benefit of cross-attention structures that enable the features of one modality to attend to the other.

Effect of relationship probing. To analyze the effectiveness of the visual and textual relationships learned via pretraining, we concatenate the visual-textual alignment representation $f_{\text{align}}$ and the relationships (Rel.) to form a relationship-aware feature vector for answer prediction. Table 3 shows that using language relationships $R_w$ leads to better results than using visual relationships $R_v$. This is due to the dependency trees available for supervising the language probe during training, whereas the visual relationships are learned in a completely self-supervised way. Combining visual and textual relationships achieves the best results. Our method SSRP-Cross (75.71) outperforms LXMERT (74.9) and VisualBERT (67.4) on the NLVR2 dev set, demonstrating that the probed relationships are beneficial for the reasoning task.

Results on VQA & GQA. Table 4 shows the performance of SSRP-Cross on VQA and GQA. Our method outperforms ViLBERT and VisualBERT, while being highly competitive with the best method, which is trained on considerably larger training corpora.

Results on image captioning. Unlike recent VL pretraining methods, which cannot be applied to single-modality vision tasks such as image captioning due to the cross-attention used in pretraining, our SSRP-Share and SSRP-Visual models do not have such a limitation. Thus, we apply the stronger model, SSRP-Visual, to image captioning using its refined object features and the learned implicit visual relationships. Table 5 shows the quantitative results, where SSRP-Visual outperforms the baselines, indicating that the learned relationship-aware image representations can benefit image captioning. Note that the online results of BUTD are achieved with a model ensemble, while we use a single model.

Results on the online MSCOCO test server:

                     BLEU-1  BLEU-4  METEOR  CIDEr
BUTD [64] (c5)        80.2    36.9    27.6   117.9
SSRP-Visual (c5)      81.5    37.5    28.3   119.8
BUTD [64] (c40)       95.2    68.5    36.7   120.5
SSRP-Visual (c40)     95.3    68.6    37.2   122.4

Figure 3: Examples of generated relationships for different augmented images (original, image HFlip, RoI jitter, RoI HFlip, RoI rotate) and sentences (En-De-En and En-Ru-En back-translations). The bottom part shows the dependency trees resulting from SSRP-Cross outputs. Black edges above each sentence form the gold tree provided by Stanza [49]; red edges are produced by our SSRP-Cross.
Figure 4: A visualization of query images and their top-2 retrieved images on the MSCOCO validation set. The 'Obj.' method averages object features and computes cosine similarities between images. The 'Obj. + Rel.' method enhances the object features according to the predicted relationships.

What do probes learn during training? To answer this, we visualize in Fig. 3 the heat-maps of a few relationship examples generated by SSRP-Cross, where a darker color indicates a closer relationship. In particular, the first row shows the example images and their augmented counterparts, each of which contains objects and their probed visual relationships represented by straight lines with varying color intensities. The second row presents the visual relationship distance graphs for the corresponding images. The bottom rows show the distance graphs and dependency trees for the augmented captions. Fig. 3 shows that the probed dependency trees closely resemble the gold dependency trees. In addition, the distance graphs of the original data samples and their augmented counterparts, for both sentences and images, are close to each other, validating our assumption that visual/linguistic relationships should be preserved under data augmentation. Remarkably, the learned implicit relationships between objects are stable across differently augmented images, despite the fact that no gold visual relationships are provided during training.

Are visual relationships useful for visual tasks? To further verify the benefits of implicit visual relationships in single-modality visual tasks, we perform image retrieval on MSCOCO with SSRP-Visual. Fig. 4 shows the top-2 image retrieval results. As shown, 'Obj. + Rel.' retrieves better visually matching images that are consistent with the object relationships in the query images. For example, in the third example, the person in the top-1 retrieved image is next to a pizza, similar to the original image. This suggests that our model can capture complex underlying visual relationships.

5 Conclusion
We have proposed a self-supervised visual relationship probing method that implicitly learns visual relationships without training on ground-truth relationship annotations. Our method transfers the textual relationships from image descriptions to image objects and explores the visual relationships by maximizing the agreement between differently augmented images via contrastive learning. Through our relationship probes, we have demonstrated that relationship structures in images and sentences can be well explored with well-designed distance and contrastive learning objectives. We believe such implicit relationships in images and language can help improve many existing vision-language tasks, especially in scenarios with limited annotations.

Broader Impact
Current representation learning models such as BERT and the like follow a similar structure. We think it is important to discover or probe the implicit knowledge that these models capture about language and vision. Our research on self-supervised relationship probing is a push in that direction and can be used for grounding the relationships expressed in language. In this paper, we introduce SSRP, a self-supervised relationship probing method for visual and textual relationship extraction.
Our research could be used to enrich current scene graph generation methods and to complete the missing relationships between objects. The visual relationships generated by our method could be applied to a wide range of vision and vision-language applications, including image captioning, image retrieval, object detection, visual question answering, visual reasoning, and visual-textual cross-modal retrieval. Here, we discuss the broader impact on two important example applications, image retrieval and image captioning, which can benefit greatly from the implicit relationships obtained with our method. By performing image retrieval using the implicit visual relationships discovered with our method, visual search engines can provide higher-quality results that better respect the visual relationships contained in query images. This provides a smoother visual search experience and helps users find their desired images. On the other hand, for image captioning, the implicit visual relationships generated by our method enable richer and improved descriptions that more accurately describe the scenes in images. This can help blind or visually impaired people [66] 'see' their surrounding environments better.

In terms of technical impact, our method opens a new direction for modeling visual object relationships, one that differs fundamentally from current visual relation models that rely heavily on human-annotated explicit visual relation labels. Annotating visual relationships is a highly subjective process in which different annotators are likely to annotate quite differently; relations are also very diverse, with no clear definition. Our approach bypasses these annotation challenges by discovering rich implicit relations directly from natural images and their textual descriptions in a self-supervised manner, without using any explicit relation annotations. Thus, our method leads to richer and fairer visual relation models. In addition, in terms of data, our method goes beyond current pretraining models that combine more and more datasets for self-supervised training. Instead, our method is designed to work effectively with augmented data that can be obtained cheaply with the proposed augmentation strategies and integrated directly into the self-supervision objectives. Overall, our method makes VL pretraining and visual relationship modeling more accessible to the masses.
1. What is the main contribution of the paper regarding modeling relationships among entities?
2. What are the strengths of the proposed approach, particularly in leveraging cross-modal and intra-modal relationships?
3. What are the weaknesses of the paper, especially regarding the use of supervised methods and the limited evaluation of the method's effectiveness?
4. Do you have any concerns about the reliance on strong structural information provided by GT parses?
5. How could the authors improve their evaluation of the method's effectiveness and of the representation graphs that the model learns?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper suggests a self-supervised approach to modeling relationships among entities in the same modality by leveraging cross-modal and intra-modal relationships, and evaluates the approach on a suite of vision-and-language tasks.

Strengths
Learning relationships without supervision, only by leveraging the intrinsic alignment between modalities, is a good idea. Scene graphs can be a powerful representation for many downstream tasks since they provide many useful abstractions, and doing away with the need for labels can bring obvious benefits in training them. Contrastive methods have seen much success in the self-supervised learning community, and their application to learning relationship graphs is novel to my knowledge. The technique appears to lead to modest performance gains on NLVR2. The approach does not require massive amounts of data.

Weaknesses
While the method is referred to as self-supervised, it appears the authors use supervised methods to extract ground truth for object labels and sentence dependency trees. In particular, the dependency trees are used to guide learning of the relationship-graph-inducing distance metric in both vision and language. I am concerned as to whether this method relies on the strong structural information provided by GT parses to guide learning. The authors clarify that they call their approach self-supervised w.r.t. the lack of predicate labels, but there is still a good amount of supervision seemingly used. While there are some performance gains on NLVR2 as shown in Table 3, more ablations of the benefit of adding "Stage 2" would help further show the effectiveness of the method. In particular, since the approach is framed as one that is more useful for downstream tasks than on its own, this evaluation seems a little weak. On the other hand, there is also not much analysis presented of the representation graphs that the model does learn. Both these factors combine to make it hard to judge the method's effectiveness as a whole. As an aside, it is not clear to me how much variation is acquired by back-translation, or whether that variation is sensible. It may have been interesting to explore other methods of text augmentation.
NIPS
Title Self-Supervised Relationship Probing Abstract Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current humanannotated visual relationship datasets suffer from the long-tailed predicate distribution problem which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns the visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intraand inter-modality encodings to respectively model relationships within each modality separately and jointly, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of our proposed method on various vision-language tasks that benefit from improved visual relationship understanding. 1 Introduction Visual relationships that describe object relationships in images have become more and more important for high-level computer vision (CV) tasks that need complex reasoning [1, 2, 3, 4]. They are often organized in a structured graph representation called scene graph, where nodes represent objects and edges represent relationships between objects. In recent years, we have witnessed great progress with visual relationship datasets such as Visual Genome [5] and the application of scene graphs to various CV reasoning tasks such as image captioning [6, 7], image retrieval [8], and visual reasoning [9]. Despite this, current visual relationship models still rely on human-annotated relationship labels. Due to the combinatorics involved — two objects and one relationship between them, where objects and relationships each have different types — relationships are numerous and have a long-tailed distribution and, thus, it is difficult to collect enough annotations to sufficiently represent important but less frequently observed relationships. Consequently, current visual relationship models tend to focus on modeling only a few relationships that have a large number of human annotations [10], and they ignore relationship categories with few annotations. We have seen some research attempts that use external knowledge databases to help enrich visual relationships, however, the total number of relationships modeled is still relatively small [11]. On the other hand, in the past few years, we have seen significant progress in natural language processing (NLP) towards building contextualized language models with self-supervised pretraining objectives [12, 13]. The removal of human annotators from the training loop has enabled training on massive unlabeled datasets, leading to significant advances in NLP performance [14, 15]. These trends have also brought significant advances in vision-language (VL) pretraining tasks [16, 17, 18, 19, 20]. Most existing VL pretraining methods concatenate visual objects and the corresponding sentences as one input and adopt the Transformer [21] as the core module to learn contextualized multi-modal representations in a self-supervised manner via self- and cross-attentions. These models rely heavily 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 
on the multi-head attention layers to explore implicit relations, or they directly rely on attention distributions to explain the relations between objects [17, 22]. However, different layers vary in their behaviors [23, 24], and it has been shown that attention alone can be deceiving when used for interpretability and explanation [25]. Thus, existing VL pretraining algorithms suffer from two problems: discovered relationships are not modeled explicitly, but are instead expected to be implicitly represented as transformer weights; and, the concatenation of multimodal inputs at training time restricts the model to require multimodal inputs at prediction time, as well. Motivated by textual relation mining work in NLP [26], we propose a novel framework that discovers dependencies between objects from the model’s representation space which addresses the problems highlighted above. Our approach is based on two simple observations: (1) when we slightly change the images, the relative visual relationships in those images remain unchanged; (2) relationships mentioned in image descriptions are visually observable in the corresponding image. Our approach relies on three modules, each consisting of a set of layers. In the first module, implicit intra-modal relationships are modeled using transformer encoders. In the second module, cross-modal learning allows for implicit relationship information to be leveraged across modalities. In the third module, relationships between visual and linguistic entities are represented explicitly as latent variables via a technique we call relationship probe. All modules are trained using self-supervision, with a first stage relying on masked language modeling to train the first two modules, and a second stage relying on contrastive learning and linguistic dependency trees as supervisory signals to train the relationship probe network. Our main contribution is a novel self-supervised relationship probing (SSRP) framework for finding dependencies in visual objects or textual entities that address issues with existing visual relationship models: it relies on self-supervision rather than explicit supervision, it explicitly models relationships as latent variables, and it leverages cross-modal learning but allows a single modality as input at prediction time. We conduct extensive experiments to demonstrate that our method can benefit both vision and VL understanding tasks. 2 Background Visual relationships. It has been demonstrated that visual relationships between objects can help improve performance on many CV tasks [8, 27, 28, 29, 30, 31]. Most of these methods assume a known explicit graph structure, and limit the graph to the most frequently occurring predicate categories while ignoring others that do not have enough labeled examples. Relaxing this assumption, some works transfer the object representations learned with predicate functions to rare predicates in few-shot scene graph generation [32, 33, 34]. Other works capture the relations via attention mechanisms [35, 36, 37, 38]. However, unlike object detectors that are trained on unambiguous and objectively defined object class labels, visual relationships are subjective and it is hard to exhaustively annotate all possible relationships between objects. Thus, we do not explicitly define or label visual relationship classes, but instead, we discover the implicit visual relationships via the accompanied captions. We call our method SSRP in the sense that we do not use any explicit predicate labels. Pretraining. 
Motivated by the huge success of BERT [13] in NLP, there is a growing interest in pretraining generic models to solve a variety of VL problems [39, 40, 22, 40, 18]. These methods generally employ BERT-like objectives to learn cross-modal representations from visual region features and word embeddings. They use self- and cross-attention mechanisms to learn joint representations that are appropriately contextualized in both modalities. However, most of the VL pretraining works heavily rely on massive amounts of visual-linguistic corpus [19, 17]. Moreover, although huge multi-modal training datasets enable pretraining methods to learn good representations for downstream multi-modal VL tasks, they usually do not benefit visual tasks that only deal with single visual modality during inference. We overcome this problem with a new approach that enables the generation of implicit visual object relationships even with only visual inputs during inference, while benefiting greatly from the cross-modality learning objectives during training. We would like to point out that several works focus on investigating the representations learned by transformer-based pretraining models [41, 42]. Their findings suggest that BERT-based network pretraining learns a rich set of intermediate representations of both semantic and syntactic information, which can be used to unearth the representations of dependency grammar relations. An interesting finding in [26] shows that BERT can recover dependency parse trees that have not been encountered during training. Coenen et al. [43] further present empirical descriptions of syntactic representations in BERT. These results in NLP motivate us to exploit BERT to find visual relationships between image regions without explicitly training on relationship annotations. 3 Method Fig. 1 gives an overview of three variants of our method: SSRPShare, SSRPVisual and SSRPCross. Each variant consists of three modules: intra-modality encoder, inter-modality encoder and relationship probe. The main difference among the three SSRP variants lies in the inter-modality encoding process. The intra-modality and inter-modality encoders are BERT-like encoders, that respectively capture implicit single-modality relations and cross-modality relations among the entities (image objects and textual tokens) and output contextual representations. The relationship probe generates relationship graphs for each modality from the encoded contextual representations in a self-supervised way. In the following, we first briefly describe BERT [13] since our approach is based on BERT architecture, and then we describe the individual modules of our SSRP frameworks as well as the learning process. 3.1 Revisiting BERT BERT uses Masked Language Modeling (MLM), a self-supervised pretraining objective that allows a transformer encoder [21] to encode a sequence from both directions simultaneously. Specifically, for an input sequence S = {w1, . . . , wNw} of Nw tokens, BERT first randomly masks out 15% of the tokens and then predicts the masked tokens in the output. The masked tokens in the input sequence are represented by a special symbol [MASK] and fed into a multi-layer transformer encoder. Let H l = {h1, . . . ,hNw} be the encoded features at the l-th transformer layer, with H0 being the input layer. 
The features at the (l + 1)-th layer are obtained by applying a transformer block defined as: H l+1 = LN ( LN ( H l + f lSelf-Att(H l) ) + f lFF ( LN(H l + f lSelf-Att(H l)) )) (1) where LN stands for layer normalization [44], f lSelf-Att(·) is a multi-headed self-attention sub-layer, fFF(·) is a feed-forward sub-layer composed of two fully-connected (FC) layers, wrapped in residual connection [45] with an LN as specified in Eq. 1. The token representations in the final layer are used to predict the masked tokens independently. 3.2 Model architecture Input embeddings. The input to the three SSRP pretraining models includes both visual and textual elements, where the former is defined as regions-of-interest (RoIs) in an image and the latter is defined as the tokens in a caption. Specifically, given an image I , we use Faster-RCNN [46] to detect RoIs {v1, . . . , vNv} and take the feature vector prior to the output layer of each RoI as the visual feature embedding. For a caption S, we insert the special tokens [CLS] and [SEP] before and after the sentence, and use the WordPiece tokenizer [47] to split it into tokens {w1, . . . , wNw}. Apart from token and visual feature embeddings, we also add positional encoding to represent tokens. In particular, for token wi, its input representation w̃i is the sum of its trainable token embedding, positional embedding (index in the sequence) and segment (image/text) embedding, followed by an LN layer. Each object vi is represented by its positional feature (normalized top-left and bottom-right coordinates) and its 2048-dimensional RoI feature, both of which are transformed through FC+LN layers to obtain the position-aware object-level embedding ṽi. Intra-modality encoding. The purpose of intra-modality encoding is to model the intra-relations of the encoded representations in one modality via self-attention, same as that in BERT. Specifically, we randomly mask out ṽ\i and w̃\j with a fixed probability, and feed the masked object-level embeddings Ṽ = { ṽ1, . . . , ṽ\i, . . . , ṽNv } and word-level embeddings W̃ = { w̃1, . . . , w̃\j , . . . , w̃Nw } into two intra-modality encoders (fV↔VIntra and f S↔S Intra ) separately. Each layer in the intra-modality encoders contains a self-attention sub-layer and an FF sub-layer (Eq. 1). Inter-modality encoding. The inter-modality encoder models the cross-modality relationships between image and textual entities. The three proposed SSRP pretraining models use different inter-modality encoding schemes as illustrated in Fig. 1. In SSRPShare, the inter-modality encoding is done with a single encoder fV SInter that is shared between the two modalities, and f V S Inter consists of a shared self-attention sub-layer wrapped in residual connection with an LN. The shared weights connect the two modalities by causing the projections of the two input modalities to align in the query, key, and value spaces. In SSRPVisual, the textual features attend to visual features to connect the two modalities. In contrast to SSRPShare, we keep fV SInter for the visual branch which contains a self-attention sub-layer and an FF sub-layer, while using fS→VInter for the textual branch which consists of a self-attention sub-layer, one unidirectional cross-attention sub-layer, and an FF sub-layer. Finally, SSRPCross uses an inter-modality bidirectional cross-attention encoder fV↔SInter , where both textual and visual features attend to each other. 
Following [17], each layer in fV↔SInter consists of two self-attention sub-layers, one bi-directional cross-attention sub-layer, and two FF sub-layers. Relationship probing. The purpose of the relationship probing is to model the implicit relations among visual or textual entities. Specifically, we build a latent relationship graph Gv for the objects in an image and a latent relationship graph Gw for the tokens in a caption, based on the unmasked contextual object representations V = {v1, . . . ,vNv} and token representations W = {w1, . . . ,wNw}, which are the output feature vectors of the inter-modality encoders. Inspired by [26], we use a visual probe and a textual probe to compute the distances for each object pair (vi,vj) ∈ Gv and each token pair (wi,wj) ∈ Gw, respectively. The distance for an object/token pair is defined as: dBu(ui,uj) 2 = (Bu(ui − uj))T (Bu(ui − uj)) (2) where u ∈ {v,w}, i and j are the object/token indices, and Bu are the parameters for the probe layer. The learning goal of a structural probe (Sec. 3.3) is to determine the edge distances between all pairs of nodes. The outputs of the visual probe and the textual probe layer are respectively the distance matrices Rv = (dBv (vi,vj) 2) ∈ RNv×Nv and Rw = (dBw(wi,wj)2) ∈ RNw×Nw , which capture implicit relations between visual/textual entities. 3.3 Learning We employ two learning stages in our method. In the first stage, we train the BERT encoders including the intra-modality encoders and the inter-modality encoders to obtain the contextual object representations V and the token representations W . In the second stage, with these contextual representations, we freeze the BERT encoders and train the two probe layers to generate implicit relationship matrices Rv and Rw. Fig. 2 shows a schematic diagram of our learning framework. 3.3.1 Stage 1: Training BERT encoders Masked language modeling with RoI feature reconstruction. We train the BERT encoders with the MLM objective to predict masked RoI feature vi and masked token wj given their surroundings I\i and S\j . We also include a L1 reconstruction smoothing loss [48] for the grounding of visual features. We minimize the following loss: LMLM = −EI,S∼D [ log p(vi|I\i, S̃) + log p(wj |S\j , Ĩ)− ∑ i L1(vi − g(vi|I\i, S̃)) ] (3) where Ĩ and S̃ are the image regions and input words with random masking, g(.) outputs the unmasked visual feature, p(vi|I\i, S̃) and p(wj |S\j , Ĩ) are respectively the predicted probabilities for the target object label and word given the masked inputs, and I and S are sampled from the training set D. Note that here we reuse the symbols v and w to represent both the visual features and the label/word for simplicity. Image-text matching. An additional loss is added to perform the instance-level alignment between an image and its caption. Both positive (y = 1) and negative (y = 0) image-sentence pairs are sampled and the model learns to align with a binary cross-entropy loss: LMatch = −EI,S∼D[y log p(falign) + (1− y) log(1− p(falign))] (4) where p(falign) is the output probability of a binary classifier and falign is the visual-textual alignment representation. For SSRPShare and SSRPVisual, falign is computed as galign([v̄;wCLS]), where v̄ =∑ i vi/Nv is the visual representation averaged over the contextual features of all the visual elements V , wCLS is the contextual representation of the special token [CLS], and galign(·) is a non-linear mapping function (see supplementary for details). For SSRPCross, we define falign = galign(wCLS). 
Essentially, we force wCLS to model either the aggregated textual or visual-textual information. The overall training loss for the first-stage pretraining becomes: LStage1 = LMLM + LMatch. 3.3.2 Stage 2: Training relationship probes In the second stage, the relationship probe layers are learned via a probe loss LSProbe and a contrastive lossLCL-all, where the former is to ensure the learned textual relationships Rw is structurally consistent with a dependency tree and the latter is to ensure that the learned relationships Rv and Rw remain stable across different data augmentations. In particular, on the language side, we use a pre-parsed dependency tree Gw for each sentence [49] to guide the textual relationship probe learning with LSProbe defined as: LSProbe = 1 N2w ∑ i,j |dGw(wi,wj)− dBw(wi,wj)2| (5) where dGw(wi,wj) is the distance between tokens wi and wj in the dependency tree Gw. For the contrastive loss, we adopt stochastic data augmentation methods to transform an original image (or sentence) into semantics-preserving data samples, and treat them as positive pairs; see Fig. 2, where Ii ∼ TI and Si ∼ TS denote image and sentence augmentations, respectively.1 For the data augmentation details, please refer to Sec. 4.1. Specifically, we sample a minibatch of Nc image-caption pairs and apply two separate augmentation strategies to each modality, resulting in 2Nc image-caption pairs. For every positive pair, its negative pairs are not sampled explicitly, but 1Note that in the interest of coherence, we describe data augmentation with contrastive learning in Stage 2, the augmented data can be used to train BERT encoders in Stage 1. instead we take the other 2(Nc − 1) augmented image-caption pairs within a minibatch as negatives. We adapt the contrastive loss introduced in [50, 51] to our cross-modal scenario. The single-modality contrastive loss LSCL(i, j) and cross-modality contrastive loss LXCL(i, j) for a positive image-caption pair 〈{Ii, Ij}, {Si, Sj}〉 are defined as: LSCL(i, j) = − log eZ v,v i,j∑2Nc k=1 1[k 6=i]e Zv,vi,k − log e Zw,wi,j∑2Nc k=1 1[k 6=i]e Zw,wi,k (6) LXCL(i, j) = − ∑ m∈{i,j} ∑ n∈{i,j} ( log ( eZv,wm,n∑2Nc k=1 1[k 6=m]e Zv,wm,k ) + log ( eZw,vm,n∑2Nc k=1 1[k 6=m]e Zw,vm,k )) (7) where 1[k 6=i] ∈ {0, 1} is an indicator function,Zx,yi,j = ((zxi >zyj )/(‖zxi ‖‖z y j ‖))/τ denotes the cosine similarity between zxi and z y j , z v and zw are the nonlinear projections of vectorized relationship matrices Rv and Rw projected using MLP projection head [50], and τ is a temperature hyperparameter [52]. The final loss is computed across all positive image-caption pairs in a mini-batch LCL-all = 12Nc ∑ i,j [LSCL(i, j) +LSCL(j, i) +LXCL(i, j)]. Note that LXCL is invariant to the order of sample indices (i, j) and thus is included just once in LCL-all. In this stage, the overall training objective is: LStage2 = LSProbe + LCL-all. 4 Experiments 4.1 Datasets and implementation details Pretraining corpus. To enlarge the training data, recent VL pretraining works [17, 16, 53, 18] use combined pretraining corpora such as Conceptual Captions (CC) [54], SBU captions [55], MSCOCO [56, 57, 58], Flickr30K [59], VQA [1], GQA [2], VG [5], BooksCorpus (BC) [60], and English Wikipedia (EW), etc. In contrast, we only aggregate pretraining data from the train (113k) and validation (5k) splits of MSCOCO [58]. Specifically, with each MSCOCO image associated with five independent caption annotations, MSCOCO provides us an aligned VL dataset of 591K image-and-sentence pairs on 118K distinct images. 
Table 1 summarizes the corpora used by different pretraining methods.

Data augmentation. Instead of combining the existing VL datasets, we expand the pretraining corpus with data augmentation on both images and sentences, as shown in Table 2. For data augmentation on images, we employ horizontal flipping (HFlip) at the image level and a few augmentations at the RoI feature level, including HFlip, rotations (90°, 180°, and 270°), and bounding box jittering (with scale factors selected from the range [0.8, 1.2]). We enrich the training sentences through two pretrained back-translators [61]: English→German→English (En-De-En) and English→Russian→English (En-Ru-En). Our augmentation strategies can generate significantly more training samples: 1.65M at the RoI level and 1.77M at the sentence level, while largely preserving the semantic information.

Pretraining setting. We pretrain our three SSRP variants shown in Fig. 1. We set the numbers of layers for the intra-modality encoders f^{S↔S}_{Intra} and f^{V↔V}_{Intra} to 9 and 5, respectively, and the number of layers for the inter-modality encoders f^{VS}_{Inter}, f^{S→V}_{Inter}, and f^{V↔S}_{Inter} to 5. For each transformer block, we set its hidden size to 768 and the number of heads to 12. To keep the sizes of the relationship matrices the same, the maximum numbers of words and objects are both set to 36. Pretraining is divided into two stages. In stage 1, we train with L_Stage1. At each iteration, we randomly mask input words and RoIs with a probability of 0.15. All models are initialized with BERT pretrained weights, and the respective pretraining corpus is listed in Table 2. For cross-modality matching, we replace each sentence with a mismatched one with a probability of 0.5. We use the Adam optimizer [62] with a linear learning-rate schedule [13] and a peak learning rate of 1e−4. The training is carried out on four Tesla V100 GPUs with a batch size of 128 for 10 epochs. After stage 1, we freeze the parameters of the intra-modality and inter-modality encoders and further train the relationship probes with L_Stage2. The syntactic dependency tree for each sentence is built by [49]. All variants of SSRP are trained for 30 epochs with Adam, a batch size of 512, and a learning rate of 5e−5.

Fine-tuning tasks. We fine-tune the pretrained models to handle multiple downstream tasks: three VL understanding tasks (NLVR2 [63], VQA [1], and GQA [2]) and a generation task (image captioning), following the standard fine-tuning settings for downstream tasks in [17, 53]. For the VL understanding tasks, we use the linearly-fused probed relationships and the visual-textual alignment prediction f_align in Eq. 4 as features. For image captioning, we utilize the Up-Down [64] framework and incorporate the refined object features learned by SSRP_Visual. The captioning model is first trained with a cross-entropy loss and then with a reinforcement learning loss [65].

4.2 Experimental results & analysis

We first perform ablation experiments over a few design choices of our method on NLVR2. We then show comparison results on the VQA, GQA and image captioning tasks.

Effect of data augmentation. Table 3 shows the ablation study results. In the 'Raw' setting, we pretrain our models only on the original corpus, while in the 'Aug.' setting, we augment the original corpus with the augmentation techniques listed in Table 2. It is evident that our data augmentation strategy indeed improves the performance of all three models.
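Among the augmentations in Table 2, bounding-box jittering is the least standard; below is a minimal numpy sketch that rescales each RoI box around its center by factors drawn from [0.8, 1.2] (clipping to image bounds is omitted, and the exact jitter rule is our assumption):

```python
import numpy as np

def jitter_boxes(boxes, rng, lo=0.8, hi=1.2):
    """Rescale (x1, y1, x2, y2) boxes around their centers by random factors."""
    cx = (boxes[:, 0] + boxes[:, 2]) / 2
    cy = (boxes[:, 1] + boxes[:, 3]) / 2
    w = (boxes[:, 2] - boxes[:, 0]) * rng.uniform(lo, hi, len(boxes))
    h = (boxes[:, 3] - boxes[:, 1]) * rng.uniform(lo, hi, len(boxes))
    return np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)

rng = np.random.default_rng(0)
jittered = jitter_boxes(np.array([[10., 20., 110., 220.]]), rng)
```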
Note that we employ data augmentation only during pretraining, but not during fine-tuning.

Effect of attention. Comparing the three variants that use different attention settings in Table 3, we observe that SSRP_Cross performs the best, and SSRP_Visual is better than SSRP_Share. This confirms the benefits of the cross-attention structures that enable the features of one modality to attend to the other.

Effect of relationship probing. To analyze the effectiveness of the visual and textual relationships learned via pretraining, we concatenate the visual-textual alignment representation f_align and the probed relationships (Rel.) to form a relationship-aware feature vector for answer prediction. Table 3 shows that using the language relationships R^w leads to better results than using the visual relationships R^v. This is due to the dependency trees available for supervising the language model during training, whereas the visual relationships are learned in a completely self-supervised way. Combining visual and textual relationships achieves the best results. Our method SSRP_Cross (75.71) outperforms LXMERT (74.9) and VisualBERT (67.4) on the NLVR2 dev set, demonstrating that the probed relationships are beneficial for the reasoning task.

Results on VQA & GQA. Table 4 shows the performance of our SSRP_Cross on VQA and GQA. Our method outperforms ViLBERT and VisualBERT, while being highly competitive with the best method, which is trained on considerably larger corpora.

Results on image captioning. Unlike recent VL pretraining methods, which cannot be applied to single-modality vision tasks such as image captioning due to the cross-attention used in pretraining, our SSRP_Share and SSRP_Visual models do not have such a limitation. Thus, we apply the stronger model SSRP_Visual to image captioning, using its refined object features and the learned implicit visual relationships. Table 5 shows the quantitative results, where SSRP_Visual outperforms the baselines, indicating that the learned relationship-aware image representations can benefit image captioning. Note that the online results of BUTD are achieved with a model ensemble, while we use a single model. Results on the online MSCOCO test server (BLEU-1 / BLEU-4 / METEOR / CIDEr):

BUTD [64] (c5): 80.2 / 36.9 / 27.6 / 117.9
SSRP_Visual (c5): 81.5 / 37.5 / 28.3 / 119.8
BUTD [64] (c40): 95.2 / 68.5 / 36.7 / 120.5
SSRP_Visual (c40): 95.3 / 68.6 / 37.2 / 122.4

Figure 3: Examples of generated relationships for different augmented images and sentences (visual dependencies from SSRP_Cross for the original images and their HFlip/rotate/jitter counterparts; textual dependencies for En-De-En and En-Ru-En back-translated captions). The bottom part shows the dependency trees resulting from SSRP_Cross outputs. Black edges above each sentence are the gold tree provided by Stanza [49], and red edges are provided by our SSRP_Cross.
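The retrieval comparison in Figure 4 (below) scores images by the cosine similarity of mean-pooled object features ('Obj.') or of relationship-enhanced features ('Obj. + Rel.'). The text does not spell out the enhancement rule, so the following numpy sketch should be read as one plausible instantiation (distances turned into affinities), not the paper's implementation:

```python
import numpy as np

def obj_score(Va, Vb):
    """Cosine similarity between mean-pooled object features."""
    a, b = Va.mean(axis=0), Vb.mean(axis=0)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def obj_rel_score(Va, Ra, Vb, Rb):
    """Same, after reweighting objects by closeness in the probed graph R
    (smaller probed distance = stronger relation); an assumed scheme."""
    def enhance(V, R):
        W = np.exp(-R)                       # distances -> affinities
        W = W / W.sum(axis=1, keepdims=True)
        return (W @ V).mean(axis=0)          # relation-aware pooled feature
    a, b = enhance(Va, Ra), enhance(Vb, Rb)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
Va, Vb = rng.normal(size=(4, 8)), rng.normal(size=(5, 8))
Ra, Rb = rng.random((4, 4)), rng.random((5, 5))
s0, s1 = obj_score(Va, Vb), obj_rel_score(Va, Ra, Vb, Rb)
```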
Figure 4: A visualization of the retrieved images on the MSCOCO validation set. The 'Obj.' method averages object features and computes the cosine similarities between images. The 'Obj. + Rel.' method enhances the object features according to the predicted relationships.

What do probes learn during training? To answer this, we visualize in Fig. 3 the heat-maps of a few relationship examples generated by SSRP_Cross, where a darker color indicates a closer relationship. In particular, the first row shows the example images and their augmented counterparts, each of which contains objects and their probed visual relationships, represented by straight lines with varying color intensity values. The second row presents the visual relationship distance graphs for the corresponding images. The bottom rows show the distance graphs and dependency trees for the augmented captions. Fig. 3 shows that the probed dependency trees closely resemble the gold dependency trees. In addition, the distance graphs of the original data samples and their augmented counterparts, for both sentences and images, are close to each other, validating our assumption that visual/linguistic relationships should be preserved even when data augmentation is applied. Remarkably, the learned implicit relationships between objects are stable across differently augmented images, despite the fact that no gold visual relationships are provided during training.

Are visual relationships useful for visual tasks? To further verify the benefits of implicit visual relationships in single-modality visual tasks, we perform image retrieval on MSCOCO with SSRP_Visual. Fig. 4 shows the top-2 image retrieval results. As shown, 'Obj. + Rel.' retrieves better visually-matching images that are consistent with the object relationships in the query images. For example, in the third example, the person in the top-1 retrieved image is next to a pizza, similar to the original image. This suggests that our model can capture the complex underlying visual relationships.

5 Conclusion

We have proposed a self-supervised visual relationship probing method that implicitly learns visual relationships without training on ground-truth relationship annotations. Our method transfers the textual relationships from image descriptions to image objects and explores the visual relationships by maximizing the agreement between differently augmented images via contrastive learning. Through our relationship probes, we have demonstrated that relationship structures in images and sentences can be well explored with well-designed distance and contrastive learning objectives. We believe such implicit relationships in images and languages can help improve many existing vision-language tasks, especially in scenarios with limited annotations.

Broader Impact

Current representation learning models such as BERT and the like follow a similar structure. We think it is important to discover or probe the implicit knowledge that these models capture about language and vision. Our research on self-supervised relationship probing is a push in that direction and can be used for grounding the relationships expressed in language. In this paper, we introduce SSRP, a self-supervised relationship probing method for visual and textual relationship extraction.
Our research could be used to enrich current scene graph generation methods and to complete the missing relationships between objects. The visual relationships generated by our method could be applied to a wide range of vision and vision-language applications, including image captioning, image retrieval, object detection, visual question answering, visual reasoning, and visual-textual cross-modal retrieval. Here, we discuss the broader impact on two important example applications (image retrieval and image captioning) which can benefit greatly from the implicit relationships obtained with our method. By performing image retrieval using the implicit visual relationships discovered with our method, visual search engines can provide higher-quality results that better respect the visual relationships contained in query images. This provides a smoother visual search experience and helps users find their desired images. On the other hand, for image captions/descriptions, the implicit visual relationships generated by our method enable richer and improved descriptions that more accurately describe the scenes in images. This can help blind or visually-impaired people [66] 'see' their surrounding environments better. In terms of technical impact, our method opens a new direction for modeling visual object relationships, one that is completely different from current visual relation models that rely heavily on human-annotated explicit visual relation labels. Annotating visual relationships is a highly subjective process where different annotators are likely to annotate quite differently. Relations are also very diverse, and there is no clear definition of them. Our approach bypasses these challenges of annotating relations by discovering rich implicit relations directly from natural images and their textual descriptions in a self-supervised manner, without using any explicit relation annotations. Thus, our method leads to richer and fairer visual relation models. In addition, in terms of data, our method also goes beyond current pretraining models that prefer to combine more and more datasets for self-supervised training. Instead, our proposed method is developed specifically to work effectively with augmented data that can be cheaply obtained with the proposed augmentation strategies and can be nicely integrated into the self-supervision objectives. Overall, our method makes VL pretraining and visual relationship modeling more accessible to the masses.
1. What is the main contribution of the paper regarding self-supervised learning? 2. What are the strengths of the proposed approach, particularly in its integration of various methods? 3. What are the weaknesses of the paper, especially regarding its complexity and lack of clarity in certain aspects? 4. Do you have any concerns regarding the sampling process for image-sentence pairs? 5. How would the performance of the proposed method change when trained with larger corpora?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors propose a self-supervised learning method that implicitly learns visual relationships without relying on visual relationship annotations. The proposed method integrates several methods for self-supervision and benefits various vision-language tasks.
Strengths
- The proposed self-supervised framework can learn visual relationships without using any relationship annotations, which avoids the limitations caused by manual labeling.
- The experimental results show that the self-supervised learning method can benefit both vision and VL understanding tasks.
Weaknesses
- The proposed method is complicated, and it is in effect a combination of a modified masked language model and contrastive learning. The contribution is therefore the application of these methods to implicit relationship learning rather than a totally new framework.
- In line 175, the authors say that both positive and negative image-sentence pairs are sampled. Since the image-text matching loss is applied at the same time as the reconstruction loss in Stage 1, the authors should give a clearer explanation of how single images and image-sentence pairs are sampled simultaneously.
- The SSRP method is complicated and contains various losses and components for self-supervised learning. The authors should provide more ablation results, such as removing the image-text matching loss.
- In Table 4 and Table 5, the authors use different datasets or different settings compared to other methods. I am curious what the performance would be if SSRP were trained with larger corpora, like VL-BERT* in Table 4.
NIPS
Title Self-Supervised Relationship Probing Abstract Structured representations of images that model visual relationships are beneficial for many vision and vision-language applications. However, current humanannotated visual relationship datasets suffer from the long-tailed predicate distribution problem which limits the potential of visual relationship models. In this work, we introduce a self-supervised method that implicitly learns the visual relationships without relying on any ground-truth visual relationship annotations. Our method relies on 1) intraand inter-modality encodings to respectively model relationships within each modality separately and jointly, and 2) relationship probing, which seeks to discover the graph structure within each modality. By leveraging masked language modeling, contrastive learning, and dependency tree distances for self-supervision, our method learns better object features as well as implicit visual relationships. We verify the effectiveness of our proposed method on various vision-language tasks that benefit from improved visual relationship understanding. 1 Introduction Visual relationships that describe object relationships in images have become more and more important for high-level computer vision (CV) tasks that need complex reasoning [1, 2, 3, 4]. They are often organized in a structured graph representation called scene graph, where nodes represent objects and edges represent relationships between objects. In recent years, we have witnessed great progress with visual relationship datasets such as Visual Genome [5] and the application of scene graphs to various CV reasoning tasks such as image captioning [6, 7], image retrieval [8], and visual reasoning [9]. Despite this, current visual relationship models still rely on human-annotated relationship labels. Due to the combinatorics involved — two objects and one relationship between them, where objects and relationships each have different types — relationships are numerous and have a long-tailed distribution and, thus, it is difficult to collect enough annotations to sufficiently represent important but less frequently observed relationships. Consequently, current visual relationship models tend to focus on modeling only a few relationships that have a large number of human annotations [10], and they ignore relationship categories with few annotations. We have seen some research attempts that use external knowledge databases to help enrich visual relationships, however, the total number of relationships modeled is still relatively small [11]. On the other hand, in the past few years, we have seen significant progress in natural language processing (NLP) towards building contextualized language models with self-supervised pretraining objectives [12, 13]. The removal of human annotators from the training loop has enabled training on massive unlabeled datasets, leading to significant advances in NLP performance [14, 15]. These trends have also brought significant advances in vision-language (VL) pretraining tasks [16, 17, 18, 19, 20]. Most existing VL pretraining methods concatenate visual objects and the corresponding sentences as one input and adopt the Transformer [21] as the core module to learn contextualized multi-modal representations in a self-supervised manner via self- and cross-attentions. These models rely heavily 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 
on the multi-head attention layers to explore implicit relations, or they directly rely on attention distributions to explain the relations between objects [17, 22]. However, different layers vary in their behaviors [23, 24], and it has been shown that attention alone can be deceiving when used for interpretability and explanation [25]. Thus, existing VL pretraining algorithms suffer from two problems: discovered relationships are not modeled explicitly, but are instead expected to be implicitly represented as transformer weights; and, the concatenation of multimodal inputs at training time restricts the model to require multimodal inputs at prediction time, as well. Motivated by textual relation mining work in NLP [26], we propose a novel framework that discovers dependencies between objects from the model’s representation space which addresses the problems highlighted above. Our approach is based on two simple observations: (1) when we slightly change the images, the relative visual relationships in those images remain unchanged; (2) relationships mentioned in image descriptions are visually observable in the corresponding image. Our approach relies on three modules, each consisting of a set of layers. In the first module, implicit intra-modal relationships are modeled using transformer encoders. In the second module, cross-modal learning allows for implicit relationship information to be leveraged across modalities. In the third module, relationships between visual and linguistic entities are represented explicitly as latent variables via a technique we call relationship probe. All modules are trained using self-supervision, with a first stage relying on masked language modeling to train the first two modules, and a second stage relying on contrastive learning and linguistic dependency trees as supervisory signals to train the relationship probe network. Our main contribution is a novel self-supervised relationship probing (SSRP) framework for finding dependencies in visual objects or textual entities that address issues with existing visual relationship models: it relies on self-supervision rather than explicit supervision, it explicitly models relationships as latent variables, and it leverages cross-modal learning but allows a single modality as input at prediction time. We conduct extensive experiments to demonstrate that our method can benefit both vision and VL understanding tasks. 2 Background Visual relationships. It has been demonstrated that visual relationships between objects can help improve performance on many CV tasks [8, 27, 28, 29, 30, 31]. Most of these methods assume a known explicit graph structure, and limit the graph to the most frequently occurring predicate categories while ignoring others that do not have enough labeled examples. Relaxing this assumption, some works transfer the object representations learned with predicate functions to rare predicates in few-shot scene graph generation [32, 33, 34]. Other works capture the relations via attention mechanisms [35, 36, 37, 38]. However, unlike object detectors that are trained on unambiguous and objectively defined object class labels, visual relationships are subjective and it is hard to exhaustively annotate all possible relationships between objects. Thus, we do not explicitly define or label visual relationship classes, but instead, we discover the implicit visual relationships via the accompanied captions. We call our method SSRP in the sense that we do not use any explicit predicate labels. Pretraining. 
Motivated by the huge success of BERT [13] in NLP, there is a growing interest in pretraining generic models to solve a variety of VL problems [39, 40, 22, 40, 18]. These methods generally employ BERT-like objectives to learn cross-modal representations from visual region features and word embeddings. They use self- and cross-attention mechanisms to learn joint representations that are appropriately contextualized in both modalities. However, most of the VL pretraining works heavily rely on massive amounts of visual-linguistic corpus [19, 17]. Moreover, although huge multi-modal training datasets enable pretraining methods to learn good representations for downstream multi-modal VL tasks, they usually do not benefit visual tasks that only deal with single visual modality during inference. We overcome this problem with a new approach that enables the generation of implicit visual object relationships even with only visual inputs during inference, while benefiting greatly from the cross-modality learning objectives during training. We would like to point out that several works focus on investigating the representations learned by transformer-based pretraining models [41, 42]. Their findings suggest that BERT-based network pretraining learns a rich set of intermediate representations of both semantic and syntactic information, which can be used to unearth the representations of dependency grammar relations. An interesting finding in [26] shows that BERT can recover dependency parse trees that have not been encountered during training. Coenen et al. [43] further present empirical descriptions of syntactic representations in BERT. These results in NLP motivate us to exploit BERT to find visual relationships between image regions without explicitly training on relationship annotations. 3 Method Fig. 1 gives an overview of three variants of our method: SSRPShare, SSRPVisual and SSRPCross. Each variant consists of three modules: intra-modality encoder, inter-modality encoder and relationship probe. The main difference among the three SSRP variants lies in the inter-modality encoding process. The intra-modality and inter-modality encoders are BERT-like encoders, that respectively capture implicit single-modality relations and cross-modality relations among the entities (image objects and textual tokens) and output contextual representations. The relationship probe generates relationship graphs for each modality from the encoded contextual representations in a self-supervised way. In the following, we first briefly describe BERT [13] since our approach is based on BERT architecture, and then we describe the individual modules of our SSRP frameworks as well as the learning process. 3.1 Revisiting BERT BERT uses Masked Language Modeling (MLM), a self-supervised pretraining objective that allows a transformer encoder [21] to encode a sequence from both directions simultaneously. Specifically, for an input sequence S = {w1, . . . , wNw} of Nw tokens, BERT first randomly masks out 15% of the tokens and then predicts the masked tokens in the output. The masked tokens in the input sequence are represented by a special symbol [MASK] and fed into a multi-layer transformer encoder. Let H l = {h1, . . . ,hNw} be the encoded features at the l-th transformer layer, with H0 being the input layer. 
The features at the (l + 1)-th layer are obtained by applying a transformer block defined as:

H^{l+1} = LN( LN(H^l + f^l_Self-Att(H^l)) + f^l_FF( LN(H^l + f^l_Self-Att(H^l)) ) )    (1)

where LN stands for layer normalization [44], f^l_Self-Att(·) is a multi-headed self-attention sub-layer, and f^l_FF(·) is a feed-forward sub-layer composed of two fully-connected (FC) layers, wrapped in a residual connection [45] with an LN as specified in Eq. 1. The token representations in the final layer are used to predict the masked tokens independently.

3.2 Model architecture

Input embeddings. The input to the three SSRP pretraining models includes both visual and textual elements, where the former are defined as regions-of-interest (RoIs) in an image and the latter as the tokens in a caption. Specifically, given an image I, we use Faster R-CNN [46] to detect RoIs {v_1, ..., v_{N_v}} and take the feature vector prior to the output layer of each RoI as its visual feature embedding. For a caption S, we insert the special tokens [CLS] and [SEP] before and after the sentence, and use the WordPiece tokenizer [47] to split it into tokens {w_1, ..., w_{N_w}}. Apart from token and visual feature embeddings, we also add positional encodings to represent tokens. In particular, for token w_i, its input representation w̃_i is the sum of its trainable token embedding, positional embedding (index in the sequence) and segment (image/text) embedding, followed by an LN layer. Each object v_i is represented by its positional feature (normalized top-left and bottom-right coordinates) and its 2048-dimensional RoI feature, both of which are transformed through FC+LN layers to obtain the position-aware object-level embedding ṽ_i.

Intra-modality encoding. The purpose of intra-modality encoding is to model the intra-relations of the encoded representations within one modality via self-attention, as in BERT. Specifically, we randomly mask out ṽ_i and w̃_j with a fixed probability, and feed the masked object-level embeddings Ṽ = {ṽ_1, ..., ṽ_\i, ..., ṽ_{N_v}} and word-level embeddings W̃ = {w̃_1, ..., w̃_\j, ..., w̃_{N_w}} into two intra-modality encoders (f^{V↔V}_{Intra} and f^{S↔S}_{Intra}) separately. Each layer in the intra-modality encoders contains a self-attention sub-layer and an FF sub-layer (Eq. 1).

Inter-modality encoding. The inter-modality encoder models the cross-modality relationships between image and textual entities. The three proposed SSRP pretraining models use different inter-modality encoding schemes, as illustrated in Fig. 1. In SSRP_Share, the inter-modality encoding is done with a single encoder f^{VS}_{Inter} that is shared between the two modalities; f^{VS}_{Inter} consists of a shared self-attention sub-layer wrapped in a residual connection with an LN. The shared weights connect the two modalities by causing the projections of the two input modalities to align in the query, key, and value spaces. In SSRP_Visual, the textual features attend to the visual features to connect the two modalities. In contrast to SSRP_Share, we keep f^{VS}_{Inter} for the visual branch, which contains a self-attention sub-layer and an FF sub-layer, while using f^{S→V}_{Inter} for the textual branch, which consists of a self-attention sub-layer, one unidirectional cross-attention sub-layer, and an FF sub-layer. Finally, SSRP_Cross uses an inter-modality bidirectional cross-attention encoder f^{V↔S}_{Inter}, where both textual and visual features attend to each other.
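As a minimal illustration of the block in Eq. (1), the following single-head numpy sketch wires LN, self-attention and a two-layer FF exactly in that order; head splitting, dropout and learned LN scales are omitted, so shapes and initializations are purely illustrative:

```python
import numpy as np

def layer_norm(H, eps=1e-5):
    mu = H.mean(axis=-1, keepdims=True)
    sd = H.std(axis=-1, keepdims=True)
    return (H - mu) / (sd + eps)

def self_attention(H, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention."""
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    A = Q @ K.T / np.sqrt(K.shape[-1])
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)          # softmax over key positions
    return A @ V

def transformer_block(H, Wq, Wk, Wv, W1, W2):
    """Eq. (1): LN(LN(H + SelfAtt(H)) + FF(LN(H + SelfAtt(H))))."""
    X = layer_norm(H + self_attention(H, Wq, Wk, Wv))
    ff = np.maximum(X @ W1, 0.0) @ W2              # two FC layers with ReLU
    return layer_norm(X + ff)

rng = np.random.default_rng(0)
d, n = 16, 6
H = rng.normal(size=(n, d))
Ws = [0.1 * rng.normal(size=(d, d)) for _ in range(5)]
out = transformer_block(H, *Ws)
```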
Following [17], each layer in f^{V↔S}_{Inter} consists of two self-attention sub-layers, one bi-directional cross-attention sub-layer, and two FF sub-layers.

Relationship probing. The purpose of relationship probing is to model the implicit relations among visual or textual entities. Specifically, we build a latent relationship graph G_v for the objects in an image and a latent relationship graph G_w for the tokens in a caption, based on the unmasked contextual object representations V = {v_1, ..., v_{N_v}} and token representations W = {w_1, ..., w_{N_w}}, which are the output feature vectors of the inter-modality encoders. Inspired by [26], we use a visual probe and a textual probe to compute the distances for each object pair (v_i, v_j) ∈ G_v and each token pair (w_i, w_j) ∈ G_w, respectively. The distance for an object/token pair is defined as:

d_{B_u}(u_i, u_j)^2 = (B_u(u_i − u_j))^T (B_u(u_i − u_j))    (2)

where u ∈ {v, w}, i and j are the object/token indices, and B_u are the parameters of the probe layer. The learning goal of a structural probe (Sec. 3.3) is to determine the edge distances between all pairs of nodes. The outputs of the visual probe and the textual probe layer are respectively the distance matrices R^v = (d_{B_v}(v_i, v_j)^2) ∈ R^{N_v×N_v} and R^w = (d_{B_w}(w_i, w_j)^2) ∈ R^{N_w×N_w}, which capture the implicit relations between visual/textual entities.

3.3 Learning

We employ two learning stages in our method. In the first stage, we train the BERT encoders, including the intra-modality encoders and the inter-modality encoders, to obtain the contextual object representations V and the token representations W. In the second stage, with these contextual representations, we freeze the BERT encoders and train the two probe layers to generate the implicit relationship matrices R^v and R^w. Fig. 2 shows a schematic diagram of our learning framework.

3.3.1 Stage 1: Training BERT encoders

Masked language modeling with RoI feature reconstruction. We train the BERT encoders with the MLM objective to predict a masked RoI feature v_i and a masked token w_j given their surroundings I_\i and S_\j. We also include a smooth L1 reconstruction loss [48] for the grounding of visual features. We minimize the following loss:

L_MLM = −E_{I,S∼D} [ log p(v_i | I_\i, S̃) + log p(w_j | S_\j, Ĩ) − Σ_i L1(v_i − g(v_i | I_\i, S̃)) ]    (3)

where Ĩ and S̃ are the image regions and input words with random masking, g(·) outputs the unmasked visual feature, p(v_i | I_\i, S̃) and p(w_j | S_\j, Ĩ) are respectively the predicted probabilities for the target object label and word given the masked inputs, and I and S are sampled from the training set D. Note that here we reuse the symbols v and w to represent both the visual features and the label/word for simplicity.

Image-text matching. An additional loss is added to perform instance-level alignment between an image and its caption. Both positive (y = 1) and negative (y = 0) image-sentence pairs are sampled, and the model learns to align them with a binary cross-entropy loss:

L_Match = −E_{I,S∼D} [ y log p(f_align) + (1 − y) log(1 − p(f_align)) ]    (4)

where p(f_align) is the output probability of a binary classifier and f_align is the visual-textual alignment representation. For SSRP_Share and SSRP_Visual, f_align is computed as g_align([v̄; w_CLS]), where v̄ = Σ_i v_i / N_v is the visual representation averaged over the contextual features of all the visual elements V, w_CLS is the contextual representation of the special token [CLS], and g_align(·) is a non-linear mapping function (see supplementary for details). For SSRP_Cross, we define f_align = g_align(w_CLS).
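The probe of Eq. (2) is simply a learned linear map B followed by pairwise squared Euclidean distances; below is a minimal numpy sketch that produces the full distance matrix R for one set of contextual features (all shapes hypothetical):

```python
import numpy as np

def probe_distances(U, B):
    """Squared probe distances of Eq. (2) for all pairs of features.

    U : (N, d) contextual object or token features
    B : (k, d) probe parameters
    returns R with R[i, j] = ||B(u_i - u_j)||^2
    """
    T = U @ B.T                                      # probe-transformed features
    sq = (T ** 2).sum(axis=1)
    R = sq[:, None] + sq[None, :] - 2.0 * (T @ T.T)  # ||t_i||^2 + ||t_j||^2 - 2 t_i.t_j
    return np.maximum(R, 0.0)                        # clip tiny negative round-off

rng = np.random.default_rng(0)
R = probe_distances(rng.normal(size=(5, 32)), rng.normal(size=(8, 32)))
```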
Essentially, we force w_CLS to model either the aggregated textual or visual-textual information. The overall training loss for the first-stage pretraining becomes: L_Stage1 = L_MLM + L_Match.

3.3.2 Stage 2: Training relationship probes

In the second stage, the relationship probe layers are learned via a probe loss L_SProbe and a contrastive loss L_CL-all, where the former ensures that the learned textual relationships R^w are structurally consistent with a dependency tree, and the latter ensures that the learned relationships R^v and R^w remain stable across different data augmentations. In particular, on the language side, we use a pre-parsed dependency tree G_w for each sentence [49] to guide the textual relationship probe learning, with L_SProbe defined as:

L_SProbe = (1/N_w^2) Σ_{i,j} | d_{G_w}(w_i, w_j) − d_{B_w}(w_i, w_j)^2 |    (5)

where d_{G_w}(w_i, w_j) is the distance between tokens w_i and w_j in the dependency tree G_w. For the contrastive loss, we adopt stochastic data augmentation methods to transform an original image (or sentence) into semantics-preserving data samples, and treat them as positive pairs; see Fig. 2, where I_i ∼ T_I and S_i ∼ T_S denote image and sentence augmentations, respectively. (Note that, in the interest of coherence, we describe data augmentation together with contrastive learning in Stage 2; the augmented data can also be used to train the BERT encoders in Stage 1.) For the data augmentation details, please refer to Sec. 4.1. Specifically, we sample a minibatch of N_c image-caption pairs and apply two separate augmentation strategies to each modality, resulting in 2N_c image-caption pairs. For every positive pair, its negative pairs are not sampled explicitly; instead, we take the other 2(N_c − 1) augmented image-caption pairs within the minibatch as negatives. We adapt the contrastive loss introduced in [50, 51] to our cross-modal scenario. The single-modality contrastive loss L_SCL(i, j) and the cross-modality contrastive loss L_XCL(i, j) for a positive image-caption pair ⟨{I_i, I_j}, {S_i, S_j}⟩ are defined as:

L_SCL(i, j) = − log( e^{Z^{v,v}_{i,j}} / Σ_{k=1}^{2N_c} 1_{[k≠i]} e^{Z^{v,v}_{i,k}} ) − log( e^{Z^{w,w}_{i,j}} / Σ_{k=1}^{2N_c} 1_{[k≠i]} e^{Z^{w,w}_{i,k}} )    (6)

L_XCL(i, j) = − Σ_{m∈{i,j}} Σ_{n∈{i,j}} [ log( e^{Z^{v,w}_{m,n}} / Σ_{k=1}^{2N_c} 1_{[k≠m]} e^{Z^{v,w}_{m,k}} ) + log( e^{Z^{w,v}_{m,n}} / Σ_{k=1}^{2N_c} 1_{[k≠m]} e^{Z^{w,v}_{m,k}} ) ]    (7)

where 1_{[k≠i]} ∈ {0, 1} is an indicator function, Z^{x,y}_{i,j} = ((z^x_i)^T z^y_j / (‖z^x_i‖ ‖z^y_j‖)) / τ denotes the temperature-scaled cosine similarity between z^x_i and z^y_j, z^v and z^w are the nonlinear projections of the vectorized relationship matrices R^v and R^w obtained with an MLP projection head [50], and τ is a temperature hyperparameter [52]. The final loss is computed across all positive image-caption pairs in a mini-batch: L_CL-all = (1/2N_c) Σ_{i,j} [ L_SCL(i, j) + L_SCL(j, i) + L_XCL(i, j) ]. Note that L_XCL is invariant to the order of the sample indices (i, j) and is thus included just once in L_CL-all. In this stage, the overall training objective is: L_Stage2 = L_SProbe + L_CL-all.

4 Experiments

4.1 Datasets and implementation details

Pretraining corpus. To enlarge the training data, recent VL pretraining works [17, 16, 53, 18] use combined pretraining corpora such as Conceptual Captions (CC) [54], SBU captions [55], MSCOCO [56, 57, 58], Flickr30K [59], VQA [1], GQA [2], VG [5], BooksCorpus (BC) [60], and English Wikipedia (EW). In contrast, we only aggregate pretraining data from the train (113k) and validation (5k) splits of MSCOCO [58]. Specifically, with each MSCOCO image associated with five independent caption annotations, MSCOCO provides us an aligned VL dataset of 591K image-and-sentence pairs over 118K distinct images.
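For concreteness, the probe loss of Eq. (5) above only matches probed squared distances to dependency-tree distances; a minimal numpy sketch, assuming the pairwise tree distances have been precomputed (e.g. by shortest paths on the parse):

```python
import numpy as np

def probe_loss(D_tree, R_probe):
    """Eq. (5): mean absolute gap between tree and probed squared distances.

    D_tree  : (N, N) pairwise token distances in the dependency tree
    R_probe : (N, N) probed squared distances d_B(w_i, w_j)^2
    """
    N = D_tree.shape[0]
    return np.abs(D_tree - R_probe).sum() / N**2

# toy 3-token chain: tree distances for tokens linked 0-1-2
D_tree = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
loss = probe_loss(D_tree, R_probe=np.ones((3, 3)) - np.eye(3))
```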
Table 1 summarizes the corpora used by different pretraining methods.

Data augmentation. Instead of combining the existing VL datasets, we expand the pretraining corpus with data augmentation on both images and sentences, as shown in Table 2. For data augmentation on images, we employ horizontal flipping (HFlip) at the image level and a few augmentations at the RoI feature level, including HFlip, rotations (90°, 180°, and 270°), and bounding box jittering (with scale factors selected from the range [0.8, 1.2]). We enrich the training sentences through two pretrained back-translators [61]: English→German→English (En-De-En) and English→Russian→English (En-Ru-En). Our augmentation strategies can generate significantly more training samples: 1.65M at the RoI level and 1.77M at the sentence level, while largely preserving the semantic information.

Pretraining setting. We pretrain our three SSRP variants shown in Fig. 1. We set the numbers of layers for the intra-modality encoders f^{S↔S}_{Intra} and f^{V↔V}_{Intra} to 9 and 5, respectively, and the number of layers for the inter-modality encoders f^{VS}_{Inter}, f^{S→V}_{Inter}, and f^{V↔S}_{Inter} to 5. For each transformer block, we set its hidden size to 768 and the number of heads to 12. To keep the sizes of the relationship matrices the same, the maximum numbers of words and objects are both set to 36. Pretraining is divided into two stages. In stage 1, we train with L_Stage1. At each iteration, we randomly mask input words and RoIs with a probability of 0.15. All models are initialized with BERT pretrained weights, and the respective pretraining corpus is listed in Table 2. For cross-modality matching, we replace each sentence with a mismatched one with a probability of 0.5. We use the Adam optimizer [62] with a linear learning-rate schedule [13] and a peak learning rate of 1e−4. The training is carried out on four Tesla V100 GPUs with a batch size of 128 for 10 epochs. After stage 1, we freeze the parameters of the intra-modality and inter-modality encoders and further train the relationship probes with L_Stage2. The syntactic dependency tree for each sentence is built by [49]. All variants of SSRP are trained for 30 epochs with Adam, a batch size of 512, and a learning rate of 5e−5.

Fine-tuning tasks. We fine-tune the pretrained models to handle multiple downstream tasks: three VL understanding tasks (NLVR2 [63], VQA [1], and GQA [2]) and a generation task (image captioning), following the standard fine-tuning settings for downstream tasks in [17, 53]. For the VL understanding tasks, we use the linearly-fused probed relationships and the visual-textual alignment prediction f_align in Eq. 4 as features. For image captioning, we utilize the Up-Down [64] framework and incorporate the refined object features learned by SSRP_Visual. The captioning model is first trained with a cross-entropy loss and then with a reinforcement learning loss [65].

4.2 Experimental results & analysis

We first perform ablation experiments over a few design choices of our method on NLVR2. We then show comparison results on the VQA, GQA and image captioning tasks.

Effect of data augmentation. Table 3 shows the ablation study results. In the 'Raw' setting, we pretrain our models only on the original corpus, while in the 'Aug.' setting, we augment the original corpus with the augmentation techniques listed in Table 2. It is evident that our data augmentation strategy indeed improves the performance of all three models.
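The Stage-1 masking used in these runs draws an independent Bernoulli(0.15) per word and per RoI; a minimal numpy sketch, where the [MASK] token id and the convention of zeroing masked RoI features are assumptions rather than the paper's exact recipe:

```python
import numpy as np

def mask_inputs(token_ids, roi_feats, rng, p=0.15, mask_id=103):
    """Randomly mask tokens and RoI features for MLM-style pretraining."""
    tok = token_ids.copy()
    tok_mask = rng.random(len(tok)) < p
    tok[tok_mask] = mask_id                 # replace with an assumed [MASK] id
    rois = roi_feats.copy()
    roi_mask = rng.random(len(rois)) < p
    rois[roi_mask] = 0.0                    # zero out masked RoI features
    return tok, rois, tok_mask, roi_mask

rng = np.random.default_rng(0)
tok, rois, tm, rm = mask_inputs(np.arange(10), rng.normal(size=(6, 4)), rng)
```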
Note that we employ data augmentation only during pretraining, but not during fine-tuning.

Effect of attention. Comparing the three variants that use different attention settings in Table 3, we observe that SSRP_Cross performs the best, and SSRP_Visual is better than SSRP_Share. This confirms the benefits of the cross-attention structures that enable the features of one modality to attend to the other.

Effect of relationship probing. To analyze the effectiveness of the visual and textual relationships learned via pretraining, we concatenate the visual-textual alignment representation f_align and the probed relationships (Rel.) to form a relationship-aware feature vector for answer prediction. Table 3 shows that using the language relationships R^w leads to better results than using the visual relationships R^v. This is due to the dependency trees available for supervising the language model during training, whereas the visual relationships are learned in a completely self-supervised way. Combining visual and textual relationships achieves the best results. Our method SSRP_Cross (75.71) outperforms LXMERT (74.9) and VisualBERT (67.4) on the NLVR2 dev set, demonstrating that the probed relationships are beneficial for the reasoning task.

Results on VQA & GQA. Table 4 shows the performance of our SSRP_Cross on VQA and GQA. Our method outperforms ViLBERT and VisualBERT, while being highly competitive with the best method, which is trained on considerably larger corpora.

Results on image captioning. Unlike recent VL pretraining methods, which cannot be applied to single-modality vision tasks such as image captioning due to the cross-attention used in pretraining, our SSRP_Share and SSRP_Visual models do not have such a limitation. Thus, we apply the stronger model SSRP_Visual to image captioning, using its refined object features and the learned implicit visual relationships. Table 5 shows the quantitative results, where SSRP_Visual outperforms the baselines, indicating that the learned relationship-aware image representations can benefit image captioning. Note that the online results of BUTD are achieved with a model ensemble, while we use a single model. Results on the online MSCOCO test server (BLEU-1 / BLEU-4 / METEOR / CIDEr):

BUTD [64] (c5): 80.2 / 36.9 / 27.6 / 117.9
SSRP_Visual (c5): 81.5 / 37.5 / 28.3 / 119.8
BUTD [64] (c40): 95.2 / 68.5 / 36.7 / 120.5
SSRP_Visual (c40): 95.3 / 68.6 / 37.2 / 122.4

Figure 3: Examples of generated relationships for different augmented images and sentences (visual dependencies from SSRP_Cross for the original images and their HFlip/rotate/jitter counterparts; textual dependencies for En-De-En and En-Ru-En back-translated captions). The bottom part shows the dependency trees resulting from SSRP_Cross outputs. Black edges above each sentence are the gold tree provided by Stanza [49], and red edges are provided by our SSRP_Cross.
Figure 4: A visualization of the retrieved images on the MSCOCO validation set. The 'Obj.' method averages object features and computes the cosine similarities between images. The 'Obj. + Rel.' method enhances the object features according to the predicted relationships.

What do probes learn during training? To answer this, we visualize in Fig. 3 the heat-maps of a few relationship examples generated by SSRP_Cross, where a darker color indicates a closer relationship. In particular, the first row shows the example images and their augmented counterparts, each of which contains objects and their probed visual relationships, represented by straight lines with varying color intensity values. The second row presents the visual relationship distance graphs for the corresponding images. The bottom rows show the distance graphs and dependency trees for the augmented captions. Fig. 3 shows that the probed dependency trees closely resemble the gold dependency trees. In addition, the distance graphs of the original data samples and their augmented counterparts, for both sentences and images, are close to each other, validating our assumption that visual/linguistic relationships should be preserved even when data augmentation is applied. Remarkably, the learned implicit relationships between objects are stable across differently augmented images, despite the fact that no gold visual relationships are provided during training.

Are visual relationships useful for visual tasks? To further verify the benefits of implicit visual relationships in single-modality visual tasks, we perform image retrieval on MSCOCO with SSRP_Visual. Fig. 4 shows the top-2 image retrieval results. As shown, 'Obj. + Rel.' retrieves better visually-matching images that are consistent with the object relationships in the query images. For example, in the third example, the person in the top-1 retrieved image is next to a pizza, similar to the original image. This suggests that our model can capture the complex underlying visual relationships.

5 Conclusion

We have proposed a self-supervised visual relationship probing method that implicitly learns visual relationships without training on ground-truth relationship annotations. Our method transfers the textual relationships from image descriptions to image objects and explores the visual relationships by maximizing the agreement between differently augmented images via contrastive learning. Through our relationship probes, we have demonstrated that relationship structures in images and sentences can be well explored with well-designed distance and contrastive learning objectives. We believe such implicit relationships in images and languages can help improve many existing vision-language tasks, especially in scenarios with limited annotations.

Broader Impact

Current representation learning models such as BERT and the like follow a similar structure. We think it is important to discover or probe the implicit knowledge that these models capture about language and vision. Our research on self-supervised relationship probing is a push in that direction and can be used for grounding the relationships expressed in language. In this paper, we introduce SSRP, a self-supervised relationship probing method for visual and textual relationship extraction.
Our research could be used to enrich current scene graph generation methods and to complete the missing relationships between objects. The visual relationships generated by our method could be applied to a wide range of vision and vision-language applications, including image captioning, image retrieval, object detection, visual question answering, visual reasoning, and visual-textual cross-modal retrieval. Here, we discuss the broader impact on two important example applications (image retrieval and image captioning) which can benefit greatly from the implicit relationships obtained with our method. By performing image retrieval using the implicit visual relationships discovered with our method, visual search engines can provide higher-quality results that better respect the visual relationships contained in query images. This provides a smoother visual search experience and helps users find their desired images. On the other hand, for image captions/descriptions, the implicit visual relationships generated by our method enable richer and improved descriptions that more accurately describe the scenes in images. This can help blind or visually-impaired people [66] 'see' their surrounding environments better. In terms of technical impact, our method opens a new direction for modeling visual object relationships, one that is completely different from current visual relation models that rely heavily on human-annotated explicit visual relation labels. Annotating visual relationships is a highly subjective process where different annotators are likely to annotate quite differently. Relations are also very diverse, and there is no clear definition of them. Our approach bypasses these challenges of annotating relations by discovering rich implicit relations directly from natural images and their textual descriptions in a self-supervised manner, without using any explicit relation annotations. Thus, our method leads to richer and fairer visual relation models. In addition, in terms of data, our method also goes beyond current pretraining models that prefer to combine more and more datasets for self-supervised training. Instead, our proposed method is developed specifically to work effectively with augmented data that can be cheaply obtained with the proposed augmentation strategies and can be nicely integrated into the self-supervision objectives. Overall, our method makes VL pretraining and visual relationship modeling more accessible to the masses.
1. What is the main contribution of the paper in the field of vision and language pretraining? 2. What are the strengths of the proposed method, particularly in terms of its novelty and improvement over existing methods? 3. Are there any concerns or weaknesses in the paper, especially regarding the effectiveness of relationship probing and the comparison with other models? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions for additional experiments or comparisons to further validate the effectiveness of the proposed method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The paper introduces a new and fresh idea compared to the many minor ablations we are seeing in the V&L pretraining domain. A self-supervised method is introduced which can implicitly learn the visual relationships in an image without relying on any ground-truth visual relationship annotations, thus breaking through the curse of limited annotated visual relationship data, which also suffers from a long-tail distribution problem. The method builds the intra- and inter-modality encodings separately and then uses relationship probing (a contrastive learning loss and dependency trees) to discover the relationships within each modality. The method shows impressive results on multiple datasets and can be used by approaches which require vision-only embeddings, such as image captioning, and improves these as well.
Strengths
- The method is novel and fresh, and improves the existing object feature embeddings by building implicit visual relationship knowledge using self-supervised learning.
- Relationship probing further helps in improving the MLM pretraining representations.
- The paper also introduces data augmentation techniques (though not novel) to gather more pretraining data from COCO as the only source.
- Results suggest that both the probing and the data augmentation are useful.
- Models intra- and inter-modality encodings separately, which allows using the encodings in tasks that require inputs from only one modality.
- Enhanced features help with the image captioning task as well, improving the metrics compared to the original BUTD model when used instead of the original Faster R-CNN features.
- The results on image retrieval and the visualized visual relationships are impressive given that they are trained in a self-supervised way.
Weaknesses
I don't have many concerns with this paper, but I have some high-level issues that I believe should be addressed.
- The effect of relationship probing hasn't been studied independently of MLM training. Do we even need MLM, or can we just get away with relationship probing?
- The results on GQA are somewhat surprising compared to LXMERT. GQA is a task which should show better numbers with better visual relationship understanding, as the task depends on the scene graph itself. I understand that the corpora for other methods are larger, but can we know the numbers for VisualBERT or LXMERT trained only on COCO, to clearly understand the actual impact?
- For a fair comparison and to actually understand the differences, the number of parameters should also be compared between the different baselines and SSRP.
- It would be good to have metrics on actual retrieval tasks or zero-shot caption retrieval to see how good the model is quantitatively, along with the qualitative results.
- To understand the actual impact on downstream tasks and the quality of the learned representations, it would make sense to test on low-resource tasks such as the Hateful Memes dataset, OK-VQA, TextVQA, TextCaps, and nocaps for captioning. The current downstream task settings in the paper are data-intensive and might not be capturing the full power of the model.
NIPS
Title Low-rank Optimal Transport: Approximation, Statistics and Debiasing

Abstract
The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g. single-cell genomics) or used to improve more complex methods (e.g. balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated in Scetbon et al. [2021] holds several promises in that regard, and was shown to complement more established entropic regularization approaches, being able to insert itself in more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low nonnegative rank, yielding linear time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) or practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT.

1 Introduction

Optimal transport (OT) is used across data science to put in correspondence different sets of observations. These observations may come directly from datasets, or, in more advanced applications, depict intermediate layered representations of data. OT theory provides a single grammar to describe and solve increasingly complex matching problems (linear, quadratic, regularized, unbalanced, etc.), making it gain a stake in various areas of science such as single-cell biology Schiebinger et al. [2019], Yang et al. [2020], Demetci et al. [2020], imaging Schmitz et al. [2018], Heitz et al. [2020], Zheng et al. [2020] or neuroscience Janati et al. [2020], Koundal et al. [2020].

Regularized approaches to OT. Solving OT problems at scale poses, however, formidable challenges. The most obvious among them is computational: the Kantorovich [1942] problem on discrete measures of size n is a linear program that requires O(n^3 log n) operations to be solved. A second and equally important challenge lies in the estimation of OT in high-dimensional settings, since it suffers from the curse of dimensionality Fournier and Guillin [2015]. The advent of regularized approaches, such as entropic regularization [Cuturi, 2013], has pushed these boundaries thanks to faster algorithms [Scetbon and Cuturi, 2020, Chizat et al., 2020, Clason et al., 2021] and improved statistical aspects [Genevay et al., 2018a]. Despite these clear strengths, regularized OT solvers remain, however, costly as they typically scale quadratically in the number of observations.

Scaling up OT using low-rank couplings. While it is always intuitively possible to reduce the size of measures (e.g. using k-means) prior to solving an OT between them, a promising line of work proposes to combine both [Forrow et al., 2019, Scetbon et al., 2021, 2022]. Conceptually, these
This intuition rests on an explicit factorization of couplings into two sub-couplings. This has several computational benefits, since its computational cost becomes linear in n if the ground cost matrix seeded to the OT problem has itself a low-rank. While these computational improvements, mostly demonstrated empirically, hold several promises, the theoretical properties of these methods are not yet well established. This stands in stark contrast to the Sinkhorn approach, which is comparatively much better understood. Our Contributions. The goal of this paper is to advance our knowledge, understanding and practical ability to leverage low-rank factorizations in OT. This paper provides five contributions, targeting theoretical and practical properties of LOT: (i) We derive the rate of convergence of the low-rank OT to the true OT with respect to the non-nnegative rank parameter. (ii) We make a first step towards a better understanding of the statistical complexity of LOT by providing an upper-bound of the statistical error, made when estimating LOT using the plug-in estimator; that upper-bound has a parametric rate O( p 1/n) that is independent of the dimension. (iii) We introduce a debiased version of LOT: as the Sinkhorn divergence [Feydy et al., 2018], we show that debiased LOT is nonnegative, metrizes the weak convergence, and that it interpolates between the maximum mean discrepancy [Gretton et al., 2012] and OT. (iv) We exhibit links between the bias induced by the low-rank factorization and clustering methods. (v) We propose practical strategies to tune the step-length and the initialization of the algorithm in [Scetbon et al., 2021]. Notations. We consider (X , dX ) and (Y, dY) two nonempty compact Polish spaces and we denote M + 1 (X ) (resp. M + 1 (Y)) the space of positive Radon probability measures on X (resp. Y). For all n 1, we denote n the probability simplex of size n and ⇤n the subset of n of positive histograms. We write 1n , (1, . . . , 1)T 2 Rn and we denote similarly k · k2 the Euclidean norm and the Euclidean distance induced by this norm depending on the context. 2 Background on Low-rank Optimal Transport Let µ 2 M+1 (X ), ⌫ 2 M + 1 (Y) and c : X ⇥ Y ! R+ a nonnegative and continuous function. The Kantorovitch formulation of optimal transport between µ and ⌫ is defined by OTc(µ, ⌫) , min ⇡2⇧(µ,⌫) Z X⇥Y c(x, y)d⇡(x, y) , (1) where the feasible set is the set of distributions over the product space X ⇥Y with marginals µ and ⌫: ⇧(µ, ⌫) , ⇡ 2 M+1 (X ⇥ Y) s.t. P1#⇡ = µ, P2#⇡ = ⌫ , with P1#⇡ (resp. P2#⇡), the pushforward probability measure of ⇡ using the projection maps P1(x, y) = x (resp. P2(x, y) = y). When there exists an optimal coupling solution of (1) supported on a graph of a function, we call such function a Monge map. In the discrete setting, one can reformulate the optimal transport problem as a linear program over the space of nonnegative matrices satisfying the marginal constraints. More precisely, let a and b be respectively elements of ⇤n and ⇤ m and let also X , {x1, . . . , xn} and Y , {y1, . . . , ym} be respectively two subsets of X and Y . By denoting µa,X , Pn i=1 ai xi and ⌫b,Y , Pm j=1 bj yj the two discrete distributions associated and writing C , [c(xi, yj)]i,j , the discrete optimal transport problem can be formulated as OTc(µa,X, ⌫b,Y) = min P2⇧a,b hC,P i where ⇧a,b , {P 2 Rn⇥m+ s.t. P1m = a, PT1n = b} . (2) Scetbon et al. 
Scetbon et al. [2021] propose to constrain the discrete optimal transport problem to couplings that have a low nonnegative rank:

Definition 1. Given $M \in \mathbb{R}_+^{n\times m}$, the nonnegative rank of $M$ is defined by
$$\mathrm{rk}_+(M) \triangleq \min\Big\{q \,\Big|\, M = \sum_{i=1}^q R_i,\ \forall i,\ \mathrm{rk}(R_i) = 1,\ R_i \geq 0\Big\}.$$

Note that for any $M \in \mathbb{R}_+^{n\times m}$, we always have $\mathrm{rk}_+(M) \leq \min(n,m)$. For $r \geq 1$, we consider the set of couplings satisfying the marginal constraints with nonnegative rank at most $r$, $\Pi_{a,b}(r) \triangleq \{P \in \Pi_{a,b} : \mathrm{rk}_+(P) \leq r\}$. The discrete Low-rank Optimal Transport (LOT) problem is defined by
$$\mathrm{LOT}_{r,c}(\mu_{a,X}, \nu_{b,Y}) \triangleq \min_{P\in\Pi_{a,b}(r)} \langle C, P\rangle. \tag{3}$$
To solve this problem, Scetbon et al. [2021] show that Problem (3) is equivalent to
$$\min_{(Q,R,g)\in\mathcal{C}_1(a,b,r)\cap\mathcal{C}_2(r)} \langle C,\ Q\,\mathrm{diag}(1/g)\,R^\top\rangle, \tag{4}$$
where $\mathcal{C}_1(a,b,r) \triangleq \{(Q,R,g) \in \mathbb{R}_+^{n\times r}\times\mathbb{R}_+^{m\times r}\times(\mathbb{R}_+^*)^r \text{ s.t. } Q\mathbf{1}_r = a,\ R\mathbf{1}_r = b\}$ and $\mathcal{C}_2(r) \triangleq \{(Q,R,g) \in \mathbb{R}_+^{n\times r}\times\mathbb{R}_+^{m\times r}\times\mathbb{R}_+^r \text{ s.t. } Q^\top\mathbf{1}_n = R^\top\mathbf{1}_m = g\}$. They propose to solve it using a mirror descent scheme and prove the non-asymptotic stationary convergence of their algorithm.

While Scetbon et al. [2021] only focus on the discrete setting, we consider here its extension to arbitrary probability measures. Following [Forrow et al., 2019], we define the set of rank-$r$ couplings satisfying the marginal constraints by
$$\Pi_r(\mu,\nu) \triangleq \Big\{\pi \in \Pi(\mu,\nu) : \exists (\mu_i)_{i=1}^r \in \mathcal{M}_1^+(\mathcal{X})^r,\ (\nu_i)_{i=1}^r \in \mathcal{M}_1^+(\mathcal{Y})^r,\ \lambda \in \Delta_r^* \text{ s.t. } \pi = \sum_{i=1}^r \lambda_i\, \mu_i\otimes\nu_i\Big\}.$$
This more general definition of LOT between $\mu \in \mathcal{M}_1^+(\mathcal{X})$ and $\nu \in \mathcal{M}_1^+(\mathcal{Y})$ reads
$$\mathrm{LOT}_{r,c}(\mu,\nu) \triangleq \inf_{\pi\in\Pi_r(\mu,\nu)} \int_{\mathcal{X}\times\mathcal{Y}} c(x,y)\, d\pi(x,y). \tag{5}$$
Note that this definition of $\mathrm{LOT}_{r,c}$ is consistent, as it coincides with the one defined in (3) on discrete probability measures. Observe also that $\Pi_r(\mu,\nu)$ is compact for the weak topology and therefore the infimum in (5) is attained. See Appendix A for more details.
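To make the factorization behind (4) concrete, here is a minimal NumPy sketch — ours, not the authors' solver, and all variable names are assumptions. It builds a feasible triplet $(Q, R, g)$, checks the marginal constraints of $\mathcal{C}_1 \cap \mathcal{C}_2$, and evaluates the transport cost both naively and with the linear-time trick available when $C = AB^\top$ (for the squared Euclidean cost, $q = d+2$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, d = 500, 400, 10, 3

# Two discrete measures: points and positive weights summing to one.
x, y = rng.normal(size=(n, d)), rng.normal(size=(m, d))
a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)

# A feasible low-rank triplet (Q, R, g): route each side through a common
# inner marginal g, as in the factorization P = Q diag(1/g) R^T.
g = np.full(r, 1.0 / r)
Q = a[:, None] * g[None, :]          # Q 1_r = a  and  Q^T 1_n = g
R = b[:, None] * g[None, :]          # R 1_r = b  and  R^T 1_m = g
P = Q @ np.diag(1.0 / g) @ R.T
assert np.allclose(P.sum(1), a) and np.allclose(P.sum(0), b)

# The squared Euclidean ground cost factorizes as C = A B^T with q = d + 2.
A = np.hstack([np.sum(x**2, 1, keepdims=True), np.ones((n, 1)), -2 * x])
B = np.hstack([np.ones((m, 1)), np.sum(y**2, 1, keepdims=True), y])
C = A @ B.T

cost_naive = np.sum(C * P)                                        # O(nm) memory/time
cost_fast = np.trace((A.T @ Q) @ np.diag(1.0 / g) @ (B.T @ R).T)  # O((n+m)rq)
assert np.isclose(cost_naive, cost_fast)
```

The fast evaluation never forms the $n\times m$ matrices $C$ or $P$; this is the mechanism behind the linear-time claims recalled in Section 6.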
3 Approximation Error of LOT to Original OT as a Function of Rank

Our goal in this section is to obtain a control of the error induced by the low-rank constraint when trying to approximate the true OT cost. We first provide a control of the approximation error in the discrete setting. The proof is given in Appendix B.1.

Proposition 1. Let $n,m \geq 2$, $X \triangleq \{x_1,\dots,x_n\} \subset \mathcal{X}$, $Y \triangleq \{y_1,\dots,y_m\} \subset \mathcal{Y}$, $a \in \Delta_n^*$ and $b \in \Delta_m^*$. Then for $2 \leq r \leq \min(n,m)$, we have
$$|\mathrm{LOT}_{r,c}(\mu_{a,X},\nu_{b,Y}) - \mathrm{OT}_c(\mu_{a,X},\nu_{b,Y})| \leq \|C\|_\infty \ln\big(\min(n,m)/(r-1)\big).$$

Remark 1. Note that this result improves the control obtained in [Liu et al., 2021], where they obtain that $|\mathrm{LOT}_{r,c}(\mu_{a,X},\nu_{b,Y}) - \mathrm{OT}_c(\mu_{a,X},\nu_{b,Y})| \lesssim \|C\|_\infty \sqrt{nm}\,(\min(n,m)-r)$, as we have for any $z, z' \geq 1$ that $|\ln(z) - \ln(z')| \leq |z - z'|$.

It is in fact possible to obtain another control of the approximation error by partitioning the space where the measures are supported. For that purpose let us introduce the notion of entropy numbers.

Definition 2. Let $(\mathcal{Z}, d)$ a metric space, $\mathcal{W} \subset \mathcal{Z}$ and $k \geq 1$ an integer. Then, denoting $B_{\mathcal{Z}}(z,\varepsilon) \triangleq \{y\in\mathcal{Z} : d(z,y)\leq\varepsilon\}$, we define the $k$-th (dyadic) entropy number of $\mathcal{W}$ as
$$\mathcal{N}_k(\mathcal{W}, d) \triangleq \inf\big\{\varepsilon \text{ s.t. } \exists\, z_1,\dots,z_{2^k} \in \mathcal{Z} :\ \mathcal{W} \subset \cup_{i=1}^{2^k} B_{\mathcal{Z}}(z_i,\varepsilon)\big\}.$$

For example, any compact set $\mathcal{W}$ of $\mathbb{R}^d$ admits finite entropy numbers and, denoting $R \triangleq \sup_{w\in\mathcal{W}} \|w\|_2$, we have $\mathcal{N}_k(\mathcal{W}, \|\cdot\|_2) \leq 4R/2^{k/d}$. We obtain next a control of the approximation error of $\mathrm{LOT}_{r,c}$ to the true OT cost using entropy numbers (see proof in Appendix B.2).

Proposition 2. Let $\mu\in\mathcal{M}_1^+(\mathcal{X})$, $\nu\in\mathcal{M}_1^+(\mathcal{Y})$ and assume that $c$ is $L$-Lipschitz w.r.t. $x$ and $y$. Then for any $r\geq 1$, we have
$$|\mathrm{LOT}_{r,c}(\mu,\nu) - \mathrm{OT}_c(\mu,\nu)| \leq 2L\max\big(\mathcal{N}_{\lfloor\log_2(\lfloor\sqrt{r}\rfloor)\rfloor}(\mathcal{X}, d_\mathcal{X}),\ \mathcal{N}_{\lfloor\log_2(\lfloor\sqrt{r}\rfloor)\rfloor}(\mathcal{Y}, d_\mathcal{Y})\big).$$

This results in the following bound for the $p$-Wasserstein distance for any $p\geq 1$ on $\mathbb{R}^d$.

Corollary 1. Let $d\geq 1$, $p\geq 1$, $\mathcal{X}$ a compact subspace of $\mathbb{R}^d$ and $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$. Denoting $R \triangleq \sup_{x\in\mathcal{X}}\|x\|_2$, we obtain that for any $r\geq 1$,
$$|\mathrm{LOT}_{r,\|\cdot\|_2^p}(\mu,\nu) - \mathrm{OT}_{\|\cdot\|_2^p}(\mu,\nu)| \leq \frac{4dp\,(8R^2)^p}{r^{p/(2d)}}.$$

As per the proof of Proposition 2, we can provide a tighter control, assuming a Monge map exists.

Corollary 2. Under the same assumptions as Proposition 2, and assuming in addition that there exists a Monge map solving $\mathrm{OT}_c(\mu,\nu)$, we obtain that for any $r\geq 1$,
$$|\mathrm{LOT}_{r,c}(\mu,\nu) - \mathrm{OT}_c(\mu,\nu)| \leq L\,\mathcal{N}_{\lfloor\log_2(r)\rfloor}(\mathcal{Y}, d_\mathcal{Y}).$$

When $\mathcal{X} = \mathcal{Y}$ are subspaces of $\mathbb{R}^d$, a sufficient condition for a Monge map to exist is that either $\mu$ or $\nu$ is absolutely continuous with respect to the Lebesgue measure and that $c$ is of the form $h(x-y)$, where $h : \mathcal{X}\to\mathbb{R}_+$ is a strictly convex function [Santambrogio, 2015, Theorem 1.17]. Therefore, if $\mu$ is absolutely continuous with respect to the Lebesgue measure, we obtain for any $r\geq 1$ and $p>1$
$$|\mathrm{LOT}_{r,\|\cdot\|_2^p}(\mu,\nu) - \mathrm{OT}_{\|\cdot\|_2^p}(\mu,\nu)| \leq \frac{2dp\,(8R^2)^p}{r^{p/d}}.$$
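As a quick numerical illustration (ours), the snippet below evaluates the two bounds above — with the constants exactly as reconstructed in Corollary 1 and in its Monge-map refinement, which should be treated as assumptions of this sketch — to show the $r^{-p/(2d)}$ versus $r^{-p/d}$ decay; it is pure arithmetic on the bounds, not an estimate of the actual gap:

```python
import numpy as np

# Approximation bounds as a function of the rank r, for the p-Wasserstein cost
# on a ball of radius R in R^d (constants as stated in Corollary 1 above).
d, p, R = 2, 2, 1.0
const = 8 * R**2
for r in [4, 16, 64, 256, 1024]:
    generic = 4 * d * p * const**p / r ** (p / (2 * d))  # decay O(r^{-p/(2d)})
    monge = 2 * d * p * const**p / r ** (p / d)          # decay O(r^{-p/d})
    print(f"r={r:5d}   generic bound={generic:9.3f}   Monge-map bound={monge:9.3f}")
```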
4 Sample Complexity of LOT

We now focus on the statistical performance of the plug-in estimator for LOT. In the following we assume that $\mathcal{X} = \mathcal{Y}$ for simplicity. Given $\mu,\nu \in \mathcal{M}_1^+(\mathcal{X})$, we denote the associated empirical measures $\hat{\mu}_n \triangleq \frac{1}{n}\sum_{i=1}^n \delta_{X_i}$ and $\hat{\nu}_n \triangleq \frac{1}{n}\sum_{i=1}^n \delta_{Y_i}$, where $(X_i, Y_i)_{i=1}^n$ are sampled independently from $\mu\otimes\nu$. We consider the plug-in estimator defined as $\mathrm{LOT}_{r,c}(\hat{\mu}_n, \hat{\nu}_n)$, and we aim at quantifying the rate at which it converges towards the true low-rank optimal transport cost $\mathrm{LOT}_{r,c}(\mu,\nu)$. Before doing so, in the next Proposition we show that this estimator is consistent on compact spaces. The proof is given in Appendix B.3.

Proposition 3. Let $r\geq 1$ and $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$. Then $\mathrm{LOT}_{r,c}(\hat{\mu}_n,\hat{\nu}_n) \to_{n\to+\infty} \mathrm{LOT}_{r,c}(\mu,\nu)$ a.s.

Next we aim at obtaining the convergence rates of our plug-in estimator. In the following Proposition, we obtain a non-asymptotic upper bound on the statistical error. See Appendix B.4 for the proof.

Proposition 4. Let $r\geq 1$ and $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$. Then there exists a constant $K_r$ such that for any $\delta>0$ and $n\geq 1$ we have, with probability at least $1-2\delta$, that
$$\mathrm{LOT}_{r,c}(\hat{\mu}_n,\hat{\nu}_n) \leq \mathrm{LOT}_{r,c}(\mu,\nu) + 11\|c\|_\infty\sqrt{\frac{r}{n}} + K_r\|c\|_\infty\left[\sqrt{\frac{\log(40/\delta)}{n}} + \frac{\sqrt{r}\,\log(40/\delta)}{n}\right].$$

This result is, to the best of our knowledge, the first attempt at providing a statistical control of low-rank optimal transport. We provide an upper bound on the plug-in estimator which converges towards $\mathrm{LOT}_{r,c}$ at a parametric rate and which is independent of the dimension on general compact metric spaces. While we fall short of providing a lower bound that could match that upper bound, and therefore of providing a complete statistical complexity result, we believe this result might provide a first explanation of why, in practice, $\mathrm{LOT}_{r,c}$ displays better statistical properties than unregularized OT and its curse of dimensionality [Dudley, 1969]. In addition, that upper bound compares favorably to known results on entropic optimal transport. The rate of entropy-regularized OT does not depend on the ambient dimension with respect to $n$, but carries an exponential dependence in dimension with respect to the regularization parameter $\varepsilon$ [Mena and Niles-Weed, 2019]. By contrast, the term associated with the nonnegative rank $r$ in our bound has no direct dependence on dimension.

Our next aim is to obtain an explicit rate with respect to $r$ and $n$. In Proposition 4, we cannot control $K_r$ explicitly in the general setting. Indeed, in our proof, we obtain that $K_r \triangleq 14/\min_i \lambda_i^*$, where $(\lambda_i^*)_{i=1}^r \in \Delta_r^*$ are the weights involved in the decomposition of one optimal solution of the true $\mathrm{LOT}_{r,c}(\mu,\nu)$. Therefore the control of $K_r$ requires additional assumptions on the optimal solutions of $\mathrm{LOT}_{r,c}(\mu,\nu)$. In the following Proposition, we obtain an explicit upper bound on the plug-in estimator with respect to $r$ and $n$ in the asymptotic regime.

Proposition 5. Let $r\geq 1$, $\delta>0$ and $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$. Then there exists a constant $N_r$ such that, if $n\geq N_r$, then with probability at least $1-2\delta$ we have
$$\mathrm{LOT}_{r,c}(\hat{\mu}_n,\hat{\nu}_n) \leq \mathrm{LOT}_{r,c}(\mu,\nu) + 11\|c\|_\infty\sqrt{\frac{r}{n}} + 77\|c\|_\infty\sqrt{\frac{\log(40/\delta)}{n}}.$$

Note that one cannot recover the result obtained in Proposition 5 from the one obtained in Proposition 4, as we have $K_r \geq 14r \to_{r\to+\infty} +\infty$. In order to prove the above result, we use an extension of McDiarmid's inequality to the case where differences are bounded with high probability [Kutin, 2002]. See the proof in Appendix B.5 for more details.

5 Debiased Formulation of LOT

We introduce here the debiased formulation of $\mathrm{LOT}_{r,c}$ and show that it is able to distinguish two distributions, metrizes the convergence in law, and can be used as a new objective in order to learn distributions. We then focus on the debiasing terms involving measures with themselves, $\mathrm{LOT}_{r,c}(\mu,\mu)$, in this new divergence, and show that they can be interpreted as defining a new clustering method generalizing k-means to any geometry.

5.1 On the Properties of the Debiased Low-rank Optimal Transport

When it comes to learning (or generating) a distribution in ML applications given samples, it is crucial to consider a divergence that is able to distinguish between two distributions and metrize the convergence in law. In general, $\mathrm{LOT}_{r,c}(\mu,\mu) \neq 0$ and the minimum of $\mathrm{LOT}_{r,c}(\nu,\mu)$ with respect to $\nu$ will not necessarily recover $\mu$. In order to alleviate this issue, we propose a debiased version of $\mathrm{LOT}_{r,c}$, defined for any $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$ as
$$\mathrm{DLOT}_{r,c}(\mu,\nu) \triangleq \mathrm{LOT}_{r,c}(\mu,\nu) - \frac{1}{2}\big[\mathrm{LOT}_{r,c}(\mu,\mu) + \mathrm{LOT}_{r,c}(\nu,\nu)\big].$$
Note that $\mathrm{DLOT}_{r,c}(\nu,\nu) = 0$.
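In code, the debiasing is just a three-evaluation wrapper around any LOT solver. A minimal sketch (ours; `lot_cost` is an assumed black-box estimator of $\mathrm{LOT}_{r,c}$, e.g. a wrapper around the mirror-descent solver of Scetbon et al. [2021]):

```python
def dlot(lot_cost, mu, nu):
    """Debiased low-rank OT:
    DLOT(mu, nu) = LOT(mu, nu) - (LOT(mu, mu) + LOT(nu, nu)) / 2.
    `mu` and `nu` are whatever measure objects `lot_cost` accepts; each
    evaluation therefore costs three solver calls."""
    return lot_cost(mu, nu) - 0.5 * (lot_cost(mu, mu) + lot_cost(nu, nu))
```

By Proposition 6 below, with $r=1$ this quantity reduces to an MMD-type energy statistic, and it approaches $\mathrm{OT}_c$ as $r$ grows.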
In the next Proposition, we show that, like the Sinkhorn divergence [Genevay et al., 2018b, Feydy et al., 2018], $\mathrm{DLOT}_{r,c}$ interpolates between the Maximum Mean Discrepancy (MMD) and OT. See proof in Appendix B.6.

Proposition 6. Let $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$ and assume that $c$ is symmetric. Then we have
$$\mathrm{DLOT}_{1,c}(\mu,\nu) = -\frac{1}{2}\int_{\mathcal{X}^2} c(x,y)\, d[\mu-\nu]\otimes d[\mu-\nu](x,y).$$
If in addition we assume that $c$ is Lipschitz w.r.t. $x$ and $y$, then we have $\mathrm{DLOT}_{r,c}(\mu,\nu) \to_{r\to+\infty} \mathrm{OT}_c(\mu,\nu)$.

Next, we aim at showing some useful properties of the debiased low-rank OT for machine learning applications. For that purpose, let us first recall some definitions.

Definition 3. We say that the cost $c : \mathcal{X}\times\mathcal{X}\to\mathbb{R}_+$ is a semimetric on $\mathcal{X}$ if for all $x,x'\in\mathcal{X}$, $c(x,x') = c(x',x)$ and $c(x,x') = 0$ if and only if $x = x'$. In addition, we say that $c$ has negative type if for all $n\geq 2$, $x_1,\dots,x_n\in\mathcal{X}$ and $\alpha_1,\dots,\alpha_n\in\mathbb{R}$ such that $\sum_{i=1}^n\alpha_i = 0$, we have $\sum_{i,j=1}^n \alpha_i\alpha_j\, c(x_i,x_j) \leq 0$. We say also that $c$ has strong negative type if for all $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$, $\mu\neq\nu \implies \int_{\mathcal{X}^2} c(x,y)\, d[\mu-\nu]\otimes[\mu-\nu] < 0$.

Note that if $c$ has strong negative type, then $c$ has negative type too. For example, all Euclidean spaces, and even separable Hilbert spaces endowed with the metric induced by their inner products, have strong negative type. Also, on $\mathbb{R}^d$, the squared Euclidean distance has negative type [Sejdinovic et al., 2013]. We can now provide stronger geometric guarantees for $\mathrm{DLOT}_{r,c}$. In the next Proposition, we show that, for a large class of cost functions, $\mathrm{DLOT}_{r,c}$ is nonnegative, able to distinguish two distributions, and metrizes the convergence in law. The proof is given in Appendix B.8.

Proposition 7. Let $r\geq 1$ and let us assume that $c$ is a semimetric of negative type. Then for all $\mu,\nu\in\mathcal{M}_1^+(\mathcal{X})$, we have $\mathrm{DLOT}_{r,c}(\mu,\nu) \geq 0$. In addition, if $c$ has strong negative type, then we also have
$$\mathrm{DLOT}_{r,c}(\mu,\nu) = 0 \iff \mu = \nu \qquad\text{and}\qquad \mu_n \to \mu \iff \mathrm{DLOT}_{r,c}(\mu_n,\mu) \to 0,$$
where the convergence of the sequences of probability measures considered is the convergence in law.

Observe that when $c$ has strong negative type, $\nu \mapsto \mathrm{DLOT}_{r,c}(\nu,\mu)$ is nonnegative and admits a unique global minimizer at $\nu = \mu$. Therefore, $\mathrm{DLOT}_{r,c}$ has desirable properties for use as a loss. It is also worth noting that, in order to obtain the metrization of the convergence in law, we show the following Proposition. See proof in Appendix B.7.

Proposition 8. Let $r\geq 1$ and $(\mu_n)_{n\geq 0}$ and $(\nu_n)_{n\geq 0}$ two sequences of probability measures such that $\mu_n\to\mu$ and $\nu_n\to\nu$ with respect to the convergence in law. Then we have $\mathrm{LOT}_{r,c}(\mu_n,\nu_n) \to \mathrm{LOT}_{r,c}(\mu,\nu)$.

5.2 Low-rank Transport Bias and Clustering

We turn next to the debiasing terms appearing in DLOT and exhibit links between LOT and clustering methods. Indeed, in the discrete setting, the low-rank bias of a probability measure $\mu$, defined as $\mathrm{LOT}_{k,c}(\mu,\mu)$, can be seen as a generalized version of the k-means method for any geometry. In the next Proposition we obtain a new formulation of $\mathrm{LOT}_{k,c}(\mu,\mu)$, viewed as a general clustering method on an arbitrary metric space. See proof in Appendix B.9.

Proposition 9. Let $n\geq k\geq 1$, $X \triangleq \{x_1,\dots,x_n\}\subset\mathcal{X}$ and $a\in\Delta_n^*$. If $c$ is a semimetric of negative type, then, denoting $C = (c(x_i,x_j))_{i,j}$, we have
$$\mathrm{LOT}_{k,c}(\mu_{a,X},\mu_{a,X}) = \min_{Q}\ \langle C,\ Q\,\mathrm{diag}(1/Q^\top\mathbf{1}_n)\,Q^\top\rangle \quad\text{s.t.}\quad Q\in\mathbb{R}_+^{n\times k},\ Q\mathbf{1}_k = a. \tag{6}$$

Let us now explain in more detail the link between (6) and k-means. When $\mathcal{X}$ is a subspace of $\mathbb{R}^d$, $c$ is the squared Euclidean distance and $a = \mathbf{1}_n/n$, we recover exactly the k-means objective.

Corollary 3. Let $n\geq k\geq 1$ and $X \triangleq \{x_1,\dots,x_n\}\subset\mathbb{R}^d$. We have
$$\mathrm{LOT}_{k,\|\cdot\|_2^2}(\mu_{\mathbf{1}_n/n,X},\mu_{\mathbf{1}_n/n,X}) = \frac{2}{n}\min_{Q,z_1,\dots,z_k}\ \sum_{i=1}^n\sum_{q=1}^k Q_{i,q}\,\|x_i - z_q\|_2^2 \quad\text{s.t.}\quad Q\in\{0,1\}^{n\times k},\ Q\mathbf{1}_k = \mathbf{1}_n.$$

In the general setting, solving $\mathrm{LOT}_{k,c}(\mu_{a,X},\mu_{a,X})$ for a given geometry $c$ and a prescribed histogram $a$ offers a new clustering method, where the assignment of the points to the clusters is determined by the matrix $Q^*$ solving (6).
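The identity behind Corollary 3 can be sanity-checked numerically: for any hard assignment with uniform weights $a = \mathbf{1}_n/n$ and squared Euclidean cost, the objective of (6) equals $2/n$ times the k-means objective evaluated at cluster means. A minimal NumPy check (ours; the balanced assignment `z` is an arbitrary choice made only to avoid empty clusters):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, d = 200, 4, 2
x = rng.normal(size=(n, d))

# A hard assignment z and the corresponding feasible Q with a = 1_n / n.
z = np.arange(n) % k                               # every cluster nonempty
Q = np.zeros((n, k))
Q[np.arange(n), z] = 1.0 / n                       # Q 1_k = 1_n / n

C = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)   # squared Euclidean cost
lot_bias = np.sum(C * (Q @ np.diag(1.0 / Q.sum(0)) @ Q.T))  # objective in (6)

# k-means objective with centroids placed at the cluster means.
centroids = np.stack([x[z == q].mean(0) for q in range(k)])
kmeans_obj = np.sum((x - centroids[z]) ** 2)

assert np.isclose(lot_bias, (2.0 / n) * kmeans_obj)
```

The check holds for any assignment, not only the optimal one, because it rests on the pointwise identity $\sum_{i,j\in S}\|x_i-x_j\|^2 = 2|S|\sum_{i\in S}\|x_i-\bar{x}_S\|^2$ within each cluster $S$.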
6 Computing LOT: Adaptive Stepsizes and Better Initializations

We target in this section practical issues that arise when using [Scetbon et al., 2021, Algo. 3] to solve (4). Scetbon et al. [2021] propose to apply a mirror descent scheme with respect to the Kullback-Leibler divergence, which boils down to solving at each iteration $k\geq 0$ the following convex problem using Dykstra's algorithm [Dykstra, 1983]:
$$(Q_{k+1}, R_{k+1}, g_{k+1}) \triangleq \operatorname*{argmin}_{\zeta\in\mathcal{C}_1(a,b,r)\cap\mathcal{C}_2(r)} \mathrm{KL}(\zeta, \xi_k), \tag{7}$$
where $(Q_0,R_0,g_0)\in\mathcal{C}_1(a,b,r)\cap\mathcal{C}_2(r)$, $\xi_k \triangleq (\xi_k^{(1)},\xi_k^{(2)},\xi_k^{(3)})$ with $\xi_k^{(1)} \triangleq Q_k\odot\exp(-\gamma_k\, C R_k\,\mathrm{diag}(1/g_k))$, $\xi_k^{(2)} \triangleq R_k\odot\exp(-\gamma_k\, C^\top Q_k\,\mathrm{diag}(1/g_k))$ and $\xi_k^{(3)} \triangleq g_k\odot\exp(\gamma_k\,\omega_k/g_k^2)$, where $[\omega_k]_i \triangleq [Q_k^\top C R_k]_{i,i}$ for all $i\in\{1,\dots,r\}$, $\mathrm{KL}(w,r) \triangleq \sum_i w_i\log(w_i/r_i)$, and $(\gamma_k)_{k\geq 0}$ is a sequence of positive step sizes. In the general setting, each iteration of their algorithm requires $O(nmr)$ operations; when the ground cost matrix $C$ admits a low-rank factorization of the form $C = AB^\top$, where $A\in\mathbb{R}^{n\times q}$ and $B\in\mathbb{R}^{m\times q}$ with $q\ll\min(n,m)$, the total complexity per iteration becomes linear, $O((n+m)rq)$. Note that for the squared Euclidean cost on $\mathbb{R}^d$ we have $q = d+2$. In the following we investigate two practical aspects of the algorithm: the choice of the step sizes and the initialization.

Adaptive choice of $\gamma_k$. Scetbon et al. [2021] show experimentally that the choice of $(\gamma_k)_{k\geq 0}$ does not impact the solution obtained upon convergence, but rather the speed at which it is attained: the larger $\gamma_k$ is, the faster the algorithm converges. As a result, their algorithm simply relies on a fixed schedule. However, the range of admissible $\gamma$ depends on the problem considered and may vary from one problem to another. Indeed, the algorithm might fail to converge, as one needs to ensure at each iteration $k$ of the mirror descent scheme that the kernels $\xi_k$ do not admit zero entries in order to solve (7) using Dykstra's algorithm. Such a situation can occur when the terms involved in the exponentials become too large, which may depend on the problem considered. Therefore, it may be of particular interest for practitioners to have a generic range of admissible values for $\gamma$, independent of the considered problem, in order to alleviate parameter-tuning issues. We propose to consider instead an adaptive choice of $(\gamma_k)_{k\geq 0}$ along iterations. D'Orazio et al. [2021] and Bayandina et al. [2018] have proposed adaptive mirror descent schemes where, at each iteration, the step size is normalized by the squared dual norm of the gradient. Applying such a strategy in our case amounts to considering at each iteration
$$\gamma_k = \frac{\gamma}{\big\|\big(CR\,\mathrm{diag}(1/g),\ C^\top Q\,\mathrm{diag}(1/g),\ \mathcal{D}(Q^\top C R)/g^2\big)\big\|_\infty^2}, \tag{8}$$
where the initial $\gamma>0$ is fixed and $\mathcal{D}(\cdot)$ extracts the diagonal. By doing so, we are able to guarantee a lower bound on the exponential terms involved in the expression of the kernels $\xi_k$ at each iteration and prevent them from having zero entries. We recommend setting the global $\gamma\in[1,10]$, and observe that this range works whatever the problem considered.
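A minimal sketch (ours) of one kernel evaluation of the scheme (7) combined with the adaptive step (8). It deliberately omits the subsequent KL projection onto $\mathcal{C}_1(a,b,r)\cap\mathcal{C}_2(r)$ (Dykstra's algorithm), so this is illustrative rather than a complete solver, and all names are assumptions:

```python
import numpy as np

def md_kernels(C, Q, R, g, gamma):
    """Compute the mirror-descent kernels xi_k of (7) with the adaptive
    step size gamma_k of (8). C is n x m, Q is n x r, R is m x r, g is r."""
    grad_Q = C @ R / g[None, :]              # gradient w.r.t. Q of <C, Q diag(1/g) R^T>
    grad_R = C.T @ Q / g[None, :]            # gradient w.r.t. R
    omega = np.einsum("ir,ir->r", Q, C @ R)  # [omega]_i = [Q^T C R]_{ii}
    grad_g = -omega / g**2                   # gradient w.r.t. g
    # Step normalized by the squared sup-norm of the gradient triple, as in (8).
    sup = max(np.abs(grad_Q).max(), np.abs(grad_R).max(), np.abs(grad_g).max())
    gamma_k = gamma / sup**2
    # Exponentiated-gradient kernels; gamma_k keeps the exponents bounded,
    # so no entry underflows to zero before the Dykstra projection.
    xi_Q = Q * np.exp(-gamma_k * grad_Q)
    xi_R = R * np.exp(-gamma_k * grad_R)
    xi_g = g * np.exp(-gamma_k * grad_g)
    return (xi_Q, xi_R, xi_g), gamma_k
```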
On the choice of the initialization. As the LOT problem (4) is non-convex, the question of choosing an efficient initialization arises in practice. Scetbon et al. [2021] show experimentally that the convergence of the algorithm does not depend on the initialization chosen if no stopping criterion is used. Indeed, their experimental findings support that only well-behaved local minima are attractive. However, in practice one needs to use a stopping criterion in order to terminate the algorithm. We observe in many instances that using trivial initializers may result in spurious local minima, which trigger the stopping criterion early on and prevent the algorithm from reaching a good solution. Based on various experiments, we propose a novel initialization of the algorithm. Our initialization aims at being close to a well-behaved local minimum by clustering the input measures. When the measures are supported on a Euclidean space, we propose to find $r$ centroids $(z_i)_{i=1}^r$ of one of the two input discrete probability measures using k-means, and to solve the following convex barycenter problem:
$$\min_{Q,R}\ \langle C_{X,Z}, Q\rangle + \langle C_{Y,Z}, R\rangle - \varepsilon H(Q) - \varepsilon H(R) \quad\text{s.t.}\quad Q\mathbf{1}_r = a,\ R\mathbf{1}_r = b,\ Q^\top\mathbf{1}_n = R^\top\mathbf{1}_m, \tag{9}$$
where $C_{X,Z} = (c(x_i,z_j))_{i,j}$, $C_{Y,Z} = (c(y_i,z_j))_{i,j}$, and $H(P) = -\sum_{i,j} P_{i,j}(\log(P_{i,j})-1)$. In practice we fix $\varepsilon = 1/10$, and we then initialize LOT using the $(Q,R)$ solution of (9) and $g \triangleq Q^\top\mathbf{1}_n\ (= R^\top\mathbf{1}_m)$. Note that $(Q,R,g)$ is an admissible initialization, and finding the centroids as well as solving (9) requires $O((n+m)r)$ algebraic operations; therefore such an initialization does not change the total complexity of the algorithm. In the general (non-Euclidean) case, we propose to initialize the algorithm by applying our generalized k-means approach defined in (6) to each input measure, where we fix the common marginal to be $g = \mathbf{1}_r/r$. More precisely, denoting $C_{X,X} = (c(x_i,x_j))_{i,j}$ and $C_{Y,Y} = (c(y_i,y_j))_{i,j}$, we initialize the algorithm by solving
$$Q \in \operatorname*{argmin}_{Q}\ \langle C_{X,X},\ Q\,\mathrm{diag}(1/Q^\top\mathbf{1}_n)\,Q^\top\rangle \quad\text{s.t.}\quad Q\in\mathbb{R}_+^{n\times r},\ Q\mathbf{1}_r = a,\ Q^\top\mathbf{1}_n = \mathbf{1}_r/r,$$
$$R \in \operatorname*{argmin}_{R}\ \langle C_{Y,Y},\ R\,\mathrm{diag}(1/R^\top\mathbf{1}_m)\,R^\top\rangle \quad\text{s.t.}\quad R\in\mathbb{R}_+^{m\times r},\ R\mathbf{1}_r = b,\ R^\top\mathbf{1}_m = \mathbf{1}_r/r. \tag{10}$$
Note that again the $(Q,R,g)$ obtained is an admissible initialization, and the complexity of solving (10) is of the same order as that of solving (4); thus the total complexity of the algorithm remains the same.
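One possible implementation (ours, not the authors' code) of the Euclidean initialization (9): k-means centroids via SciPy, then the shared-marginal entropic problem solved by iterative Bregman projections (Benamou et al., 2015), a standard fixed-support barycenter routine that we substitute here. It assumes the data are scaled so that squared distances are $O(1)$ at $\varepsilon = 1/10$, otherwise the Gibbs kernels may underflow:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def lot_init(x, y, a, b, r, eps=0.1, n_iter=2000):
    """Sketch of the Euclidean initialization (9): returns an admissible
    triplet (Q, R, g) that can seed the mirror-descent solver of (4)."""
    z, _ = kmeans2(x, r, minit="++", seed=0)   # r centroids of one measure
    K1 = np.exp(-np.sum((x[:, None] - z[None]) ** 2, -1) / eps)  # n x r kernel
    K2 = np.exp(-np.sum((y[:, None] - z[None]) ** 2, -1) / eps)  # m x r kernel
    v1, v2 = np.ones(r), np.ones(r)
    for _ in range(n_iter):
        # Scale to satisfy the outer marginals Q 1_r = a and R 1_r = b.
        u1, u2 = a / (K1 @ v1), b / (K2 @ v2)
        # KL projection onto equal inner marginals: geometric mean of the two.
        g = np.sqrt((v1 * (K1.T @ u1)) * (v2 * (K2.T @ u2)))
        v1, v2 = g / (K1.T @ u1), g / (K2.T @ u2)
    Q = u1[:, None] * K1 * v1[None, :]
    R = u2[:, None] * K2 * v2[None, :]
    return Q, R, g
```

At convergence $Q^\top\mathbf{1}_n = R^\top\mathbf{1}_m = g$ holds up to the iteration tolerance, so the triplet is (approximately) admissible in the sense of $\mathcal{C}_1(a,b,r)\cap\mathcal{C}_2(r)$.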
7 Experiments

In this section, we illustrate our theoretical findings experimentally and show how our initialization provides practical improvements. For that purpose we consider three synthetic problems and one real-world dataset to: (i) illustrate the statistical rates of $\mathrm{LOT}_{r,c}$, (ii) exhibit the gradient flow of the debiased formulation $\mathrm{DLOT}_{r,c}$, (iii) use the clustering method induced by $\mathrm{LOT}_{r,c}$, and (iv) show the effect of the initialization. All experiments were run on a MacBook Pro 2019 laptop.

Statistical rates. We aim at showing the statistical rates of the plug-in estimator of $\mathrm{LOT}_{r,c}$. As $\mathrm{LOT}_{r,c}(\mu,\mu)\neq 0$ and as we do not have access to this value given samples from $\mu$, we consider instead the debiased version of the low-rank optimal transport, $\mathrm{DLOT}_{r,c}$. In Figure 1, we show that the empirical rates match the theoretical bound obtained in Proposition 4. In particular, we show that these rates do not depend on the dimension of the ground space. Note also that we recover our theoretical dependence with respect to the rank $r$: the higher the rank, the slower the convergence.

Gradient flows using DLOT. We illustrate here a practical use of DLOT for ML applications. In Figure 2, we consider $Y_1,\dots,Y_n$ independent samples from a moon-shaped distribution in 2D and, denoting $\hat{\nu}_n$ the associated empirical measure, we show the iterations obtained by a gradient descent scheme on the following optimization problem:
$$\min_{X\in\mathbb{R}^{n\times 2}}\ \mathrm{DLOT}_{r,c}(\mu_{\mathbf{1}_n/n,X},\ \hat{\nu}_n).$$
We initialize the algorithm using $n = 1000$ samples drawn from a Gaussian distribution. We show that the gradient flow of our debiased version is able to recover the target distribution. We also compare it with the gradient flow of the biased version (LOT) and show that the latter fails to reproduce the target distribution, as it learns a biased one with a low-rank structure.

Application to clustering. In this experiment we show some applications of the clustering method induced by $\mathrm{LOT}_{r,c}$. In Figure 3, we consider six datasets with different structures and we aim at recovering the clusters using (6) for some well-chosen costs. We compare the clusters obtained when considering either the squared Euclidean cost (which amounts to applying k-means) or the shortest-path distance on the data viewed as a graph. We show that our method is able to recover the clusters in these settings for well-chosen costs, and therefore the algorithm proposed in Scetbon et al. [2021] can be seen as a new alternative for clustering data.

Effect of the initialization. Our goal here is to show the effect of the initialization. In Figure 4, we display the evolution of the cost as well as the value of the stopping criterion along the iterations of the MD scheme solving (4) under different initializations. The x-axis corresponds to the total number of algebraic operations. This number is computed at each iteration of the outer loop of the algorithm proposed in Scetbon et al. [2021] and is obtained by adding up the complexity of all the operations involved in their algorithm up to that point. We consider this notion of time instead of CPU/GPU time as we do not want to be architecture/machine dependent. Recall also that the stopping criterion introduced in [Scetbon et al., 2021] is defined for all $k\geq 1$ by
$$\Delta_k \triangleq \frac{1}{\gamma_k^2}\Big(\mathrm{KL}\big((Q_k,R_k,g_k),(Q_{k-1},R_{k-1},g_{k-1})\big) + \mathrm{KL}\big((Q_{k-1},R_{k-1},g_{k-1}),(Q_k,R_k,g_k)\big)\Big),$$
where $((Q_k,R_k,g_k))_{k\geq 0}$ is the sequence solving (7). First, we show that, whatever the initialization chosen, the algorithm manages to converge to an efficient solution if no stopping criterion is used. However, the choice of the initialization may impact the termination of the algorithm, as some initializations might be too close to spurious local minima. The right panel of Figure 4 shows two main observations: (i) the initial points obtained using a "rank 2" or random initialization can be close to spurious and non-attractive local minima, which may trigger the stopping criterion too early and prevent the algorithm from continuing to run towards an attractive and well-behaved local minimum; (ii) when initializing the algorithm using the k-means-based methods we propose in (9) and (10), the stopping criterion is a decreasing function of time, meaning that the initial points are sufficiently far away from bad local minima and the algorithm converges directly towards the desired solution.

Conclusion. We assembled in this work theoretical and practical arguments to support low-rank factorizations for OT. We have presented two controls: one concerning the approximation error to the true optimal transport, and another concerning the statistical rates of the plug-in estimator. The latter is shown to be independent of the dimension, which is of particular interest when studying OT in ML settings. We have further motivated the use of LOT as a loss by introducing its debiased version and showing that it possesses desirable properties: positivity and metrization of the convergence in law. We have also presented the links between the bias induced by such regularization and clustering methods, and studied empirically the effects of the hyperparameters involved in the practical estimation of LOT. The strong theoretical foundations provided in this paper motivate further studies of the empirical behaviour of the LOT estimator, notably on finding suitable local minima and on improving the convergence of the MD scheme using other adaptive choices of step sizes.

Acknowledgements. This work was supported by a "Chaire d'excellence de l'IDEX Paris Saclay". The authors would also like to thank Gabriel Peyré and Jaouad Mourtada for enlightening conversations on the topics discussed in this work.
1. What is the focus of the paper regarding low-rank coupling/matrices in optimal transport?
2. What are the strengths of the proposed approach, particularly in terms of theoretical contributions?
3. What are the weaknesses or concerns regarding the paper's results, especially in statistical estimation?
4. Can the authors provide clarification or additional results to address the reviewer's questions?
5. Are there any other relevant results that could be added to enhance the paper's contributions?
Summary Of The Paper, Strengths And Weaknesses, Questions, Limitations
Summary Of The Paper

This paper is concerned with a model that approximates optimal transport (OT) using low-rank couplings/matrices. This model was itself proposed a couple of years ago, and this paper aims at answering important theoretical questions such as the approximation error with respect to standard OT and statistical rates of estimation. The authors introduce a "debiased" version of their estimator, similar to the one proposed for entropic OT, and make the link with clustering methods. Their theoretical work is also complemented with additional tricks for improving the numerical efficiency of the method.

Strengths And Weaknesses

Although one can argue about the usefulness of the model studied by the authors, it is a good paper by its many theoretical contributions exploring the foundations of low-rank approximation of OT. Results range from obvious and easy to non-trivial, and the paper will be a reference for other research developments around this model.

Strengths:
- Paper is well written.
- Several meaningful theoretical results.
- As far as I checked, the proofs are correct (I did not check the proof of Proposition 5).

Weaknesses:
- See my question below. The authors must address it; if not, I'll revise my rating accordingly.

Questions

My main concern is about the results for statistical estimation. Propositions 4 and 5 state a one-sided inequality, i.e. there is no absolute value on the left-hand side of the equation in Prop. 4, nor in the equation in Prop. 5. Note that a complete result on statistical estimation is really about both lower and upper bounds. However, the authors only give an upper bound, as written line 123. Although I think it is true, I do not see how to get a lower bound. It is likely I may have missed a result in the paper showing that it is a simple consequence, and it was maybe obvious for the authors; it would be meaningful to include it. Can the authors clarify their result?

As a side remark, it is rather borderline practice that the authors pretend to have a complete result on statistical complexity. Indeed, they write after Proposition 4: "This result shows that the estimation of LOTr,c is independent of the dimension and can be performed on general compact metric spaces." However, the result in its current form is only partial, and thus one cannot claim anything on statistical estimation. Indeed, the fluctuations of the opposite quantity may be much larger and dependent on the dimension. This can happen in practice. So what am I misunderstanding here?

Others:
- Is it possible to add the following result: if $\mu_n \to \mu$ for the weak-* topology, then $\mathrm{LOT}(\mu_n) \to \mathrm{LOT}(\mu)$?
- Proof of Proposition 1: the decomposition of $\pi$ at line 423 in the supplementary material (by the way, there is a typo there) should be explained a bit more. Is it a standard SVD?
- Typos in the supplementary material can be corrected: line 480, line 494.

Limitations

No particular comments.
NIPS
1. What is the focus and contribution of the paper on low-rank optimal transport?
2. What are the strengths of the proposed algorithm, particularly in terms of computational efficiency and debiasing techniques?
3. What are the weaknesses of the paper regarding its notation, labeling, and explanations?
4. Do you have any concerns about the choice of adaptive step size and its supporting evidence?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Optimal transport (OT) is becoming more and more prominent in the machine learning field; however, traditional algorithms such as the linear program are computationally slow. In the last decade, entropy-regularized OT (EOT) was proposed and improved the speed considerably. This work studies low-rank OT (LOT), an algorithm proposed by Scetbon et al. [2021] that achieves a promising linear time complexity by searching for low-cost couplings with low nonnegative rank. The rate of convergence and a dimension-independent upper bound on the sample complexity are provided. Furthermore, a debiased version of LOT (DLOT) is proposed, and the debiasing terms connect LOT to clustering methods. To improve computational performance, an adaptive step size and better initializations are introduced, and their effectiveness is empirically verified by experiments.

Strengths And Weaknesses
Strengths: This paper extends the LOT work of Scetbon et al. [2021] and studies the theoretical and practical properties of LOT in depth and from several angles. Such a complete investigation of an algorithm is essential for bringing a new member into the computational OT family. This work also proposes interesting ideas, such as linking the low-rank transport bias to clustering methods, which may inspire other applications and benefit the machine learning community.
Weaknesses: The naming of variables and the equation reference labels are confusing; as a result, the paper is sometimes hard to follow. The clarity of the paper could be improved and the notation used more consistently, so that readers can understand the meaning of the plots and equations with less effort. See the questions below for more details.

Questions
The adaptive step size improves the convergence. The authors also suggest clipping the step size to the range [1, 10] for most use cases. Could an explanation or evidence be provided to support this choice?
In Fig. 1, the notations are confusing. Could the authors choose the variable names more carefully so they are consistent with previous sections? E.g., n is the number of samples here and in a few places, but elsewhere n represents a dimension. Also, is the dimension d the same as stated in line 203, or is it the dimension of the marginal?
In Fig. 1, it seems that the DLOT values for larger r are higher across all d cases. Is there an intuition or explanation?
Several equation references are mislabeled. E.g., in lines 187, 192, and 252, the references in the main text are to (15) while the supplement uses (16); also, there is no eq. (15) in the main text. Please fix and make them consistent.
Minor errors: [1] Lines 141-142: "...one obtained in Proposition in 4...", please remove the redundant "in". [2] Line 244: a typo: "...we show the iterates obtained by a gradient descent...", should "iterates" be "iterations"? [3] Eq. (6): "Diag" should be "diag" instead.

Limitations
There is no negative societal impact. The authors address the limitations.
1. What are the main contributions and strengths of the paper regarding low-rank factorizations for OT?
2. What are the weaknesses or concerns regarding the practicality of LOT, particularly in terms of computational benefits and spurious local minima?
3. How does the paper improve upon previous results in terms of bounds for approximation error and sample complexity?
4. Can you provide more details or explanations regarding the debiased formulation of LOT and its connection to clustering?
5. Are there any missing experiments or empirical evidence that could support the benefits gained by low-rank approximation?
6. In what applications can LOT be efficient when the ground cost matrix admits a low-rank factorization?
Summary Of The Paper
This work advances the theory of low-rank factorizations for OT by studying the approximation error as a function of rank and the sample complexity of LOT. It additionally proposes the debiased formulation, DLOT, which is shown to interpolate between MMD and OT and to metrize weak convergence. An additional connection to clustering is drawn, and better practices of using adaptive step sizes and better initializations are suggested. Experiments support the claims on 2D synthetic examples as well as on the Newsgroup20 dataset.

Strengths And Weaknesses
This paper is well written and easy to follow. Although the contributions are spread across several aspects of LOT, they are clearly stated and adequately justified. On the theory side, the paper provides comprehensive bounds for the approximation error and sample complexity while improving the bounds from previous results (e.g. Liu et al. 2021). The debiased formulation of LOT is then shown to exhibit desirable properties similar to the Sinkhorn divergence. The connection to clustering is interesting since it is specific to the low-rank approximation, something that full-rank versions cannot do. While I do not find any of the results surprising or groundbreaking, they are solid and much needed for future research on LOT. The experiments section is short but verifies part of the theory. Some experiments justifying the adaptive choice of γ_k versus no adaptation are missing. Parts of the figures and captions could be improved; see the detailed comments below.
A central question I have regarding the practicality of LOT: is the computational benefit of LOT worth the introduction of nonconvexity and spurious local minima? I would hope to see more experiments (at least empirically) demonstrating the benefits gained by the low-rank approximation, and advice on which r to choose. It seems to me that LOT is only efficient when the ground cost matrix admits a low-rank factorization. In what applications is such a condition met?
Comments:
Line 99: "obtain next a control the approximation", missing "of"?
Line 126: the sample complexity shows promise since it does not depend on the dimension, but wouldn't ||c||_∞ in Proposition 4 depend on it, in the sense that in many applications the diameter of X could increase exponentially in d? Also, as discussed in the paragraph below Proposition 4, K_r could go to infinity.
Line 187, Line 192: (15) does not exist. Do you mean (6)?
Line 238: "we do not have access to the this", no "the" here.
Line 252: (15) should be (6)?
Figure 1: which r is used for the upper bound curve?
Figure 4: what is the x-axis "operations"? Why do some curves not start at 0 on the x-axis?
Figure 4: what is the takeaway message from the right figure?

Questions
Please refer to my questions in the "Strengths And Weaknesses" section.

Limitations
The authors have addressed the limitations and the future directions to take to advance LOT. Societal impact is not discussed, but I do not think it is needed.
NIPS
Title Low-rank Optimal Transport: Approximation, Statistics and Debiasing Abstract The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g. single-cell genomics) or used to improve more complex methods (e.g. balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The lowrank optimal transport (LOT) approach advocated in Scetbon et al. [2021] holds several promises in that regard, and was shown to complement more established entropic regularization approaches, being able to insert itself in more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low-nonnegative rank, yielding linear time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) or practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT. 1 Introduction Optimal transport (OT) is used across data-science to put in correspondence different sets of observations. These observations may come directly from datasets, or, in more advanced applications, depict intermediate layered representations of data. OT theory provides a single grammar to describe and solve increasingly complex matching problems (linear, quadratic, regularized, unbalanced, etc...), making it gain a stake in various areas of science such as as single-cell biology Schiebinger et al. [2019], Yang et al. [2020], Demetci et al. [2020], imaging Schmitz et al. [2018], Heitz et al. [2020], Zheng et al. [2020] or neuroscience Janati et al. [2020], Koundal et al. [2020]. Regularized approaches to OT. Solving OT problems at scale poses, however, formidable challenges. The most obvious among them is computational: the Kantorovich [1942] problem on discrete measures of size n is a linear program that requires O(n3 log n) operations to be solved. A second and equally important challenge lies in the estimation of OT in high-dimensional settings, since it suffers from the curse-of-dimensionality Fournier and Guillin [2015]. The advent of regularized approaches, such as entropic regularization [Cuturi, 2013], has pushed these boundaries thanks for faster algorithms [Scetbon and Cuturi, 2020, Chizat et al., 2020, Clason et al., 2021] and improved statistical aspects [Genevay et al., 2018a]. Despite these clear strengths, regularized OT solvers remain, however, costly as they typically scale quadratically in the number of observations. Scaling up OT using low-rank couplings. While it is always intuitively possible to reduce the size of measures (e.g. using k-means) prior to solving an OT between them, a promising line of work proposes to combine both [Forrow et al., 2019, Scetbon et al., 2021, 2022]. Conceptually, these 36th Conference on Neural Information Processing Systems (NeurIPS 2022). low-rank approaches solve simultaneously both an optimal clustering/aggregation strategy with the computation of an effective transport. 
This intuition rests on an explicit factorization of couplings into two sub-couplings. This has several computational benefits, since its computational cost becomes linear in n if the ground cost matrix seeded to the OT problem has itself a low-rank. While these computational improvements, mostly demonstrated empirically, hold several promises, the theoretical properties of these methods are not yet well established. This stands in stark contrast to the Sinkhorn approach, which is comparatively much better understood. Our Contributions. The goal of this paper is to advance our knowledge, understanding and practical ability to leverage low-rank factorizations in OT. This paper provides five contributions, targeting theoretical and practical properties of LOT: (i) We derive the rate of convergence of the low-rank OT to the true OT with respect to the non-nnegative rank parameter. (ii) We make a first step towards a better understanding of the statistical complexity of LOT by providing an upper-bound of the statistical error, made when estimating LOT using the plug-in estimator; that upper-bound has a parametric rate O( p 1/n) that is independent of the dimension. (iii) We introduce a debiased version of LOT: as the Sinkhorn divergence [Feydy et al., 2018], we show that debiased LOT is nonnegative, metrizes the weak convergence, and that it interpolates between the maximum mean discrepancy [Gretton et al., 2012] and OT. (iv) We exhibit links between the bias induced by the low-rank factorization and clustering methods. (v) We propose practical strategies to tune the step-length and the initialization of the algorithm in [Scetbon et al., 2021]. Notations. We consider (X , dX ) and (Y, dY) two nonempty compact Polish spaces and we denote M + 1 (X ) (resp. M + 1 (Y)) the space of positive Radon probability measures on X (resp. Y). For all n 1, we denote n the probability simplex of size n and ⇤n the subset of n of positive histograms. We write 1n , (1, . . . , 1)T 2 Rn and we denote similarly k · k2 the Euclidean norm and the Euclidean distance induced by this norm depending on the context. 2 Background on Low-rank Optimal Transport Let µ 2 M+1 (X ), ⌫ 2 M + 1 (Y) and c : X ⇥ Y ! R+ a nonnegative and continuous function. The Kantorovitch formulation of optimal transport between µ and ⌫ is defined by OTc(µ, ⌫) , min ⇡2⇧(µ,⌫) Z X⇥Y c(x, y)d⇡(x, y) , (1) where the feasible set is the set of distributions over the product space X ⇥Y with marginals µ and ⌫: ⇧(µ, ⌫) , ⇡ 2 M+1 (X ⇥ Y) s.t. P1#⇡ = µ, P2#⇡ = ⌫ , with P1#⇡ (resp. P2#⇡), the pushforward probability measure of ⇡ using the projection maps P1(x, y) = x (resp. P2(x, y) = y). When there exists an optimal coupling solution of (1) supported on a graph of a function, we call such function a Monge map. In the discrete setting, one can reformulate the optimal transport problem as a linear program over the space of nonnegative matrices satisfying the marginal constraints. More precisely, let a and b be respectively elements of ⇤n and ⇤ m and let also X , {x1, . . . , xn} and Y , {y1, . . . , ym} be respectively two subsets of X and Y . By denoting µa,X , Pn i=1 ai xi and ⌫b,Y , Pm j=1 bj yj the two discrete distributions associated and writing C , [c(xi, yj)]i,j , the discrete optimal transport problem can be formulated as OTc(µa,X, ⌫b,Y) = min P2⇧a,b hC,P i where ⇧a,b , {P 2 Rn⇥m+ s.t. P1m = a, PT1n = b} . (2) Scetbon et al. 
[2021] propose to constrain the discrete optimal transport problem to couplings that have a low-nonnegative rank: Definition 1. Given M 2 Rn⇥m+ , the nonnegative rank of M is defined by: rk+(M) , min{q|M = Pq i=1 Ri, 8i, rk(Ri) = 1, Ri 0} . Note that for any M 2 Rn⇥m+ , we always have that rk+(M) min(n,m). For r 1, we consider the set of couplings satisfying marginal constaints with nonnegative-rank of at most r as ⇧a,b(r) , {P 2 ⇧a,b, rk+(P ) r}. The discrete Low-rank Optimal Transport (LOT) problem is defined by: LOTr,c(µa,X, ⌫b,Y) , min P2⇧a,b(r) hC,P i . (3) To solve this problem, Scetbon et al. [2021] show that Problem (3) is equivalent to min (Q,R,g)2C1(a,b,r)\C2(r) hC,Q diag(1/g)RT i , (4) where C1(a, b, r) , n (Q,R, g) 2 Rn⇥r+ ⇥ Rm⇥r+ ⇥ (R⇤+)r s.t. Q1r = a,R1r = b o and C2(r) ,n (Q,R, g) 2 Rn⇥r+ ⇥ Rm⇥r+ ⇥ Rr+ s.t. QT1n = RT1m = g o . They propose to solve it using a mirror descent scheme and prove the non-asymptotic stationary convergence of their algorithm. While Scetbon et al. [2021] only focus on the discrete setting, we consider here its extension for arbitrary probability measures. Following [Forrow et al., 2019], we define the set of rank-r couplings satisfying marginal constraints by: ⇧r(µ, ⌫) , {⇡ 2 ⇧(µ, ⌫) : 9(µi)ri=1 2 M+1 (X )r, (⌫i)ri=1 2 M+1 (Y)r, 2 ⇤r s.t. ⇡ = rX i=1 iµi⌦⌫i} . This more general definition of LOT between µ 2 M+1 (X ) and ⌫ 2 M + 1 (Y) reads: LOTr,c(µ, ⌫) , inf ⇡2⇧r(µ,⌫) Z X⇥Y c(x, y)d⇡(x, y) . (5) Note that this definition of LOTr,c is consistent as it coincides with the one defined in (3) on discrete probability measures. Observe also that ⇧r(µ, ⌫) is compact for the weak topology and therefore the infimum in (5) is attained. See Appendix A for more details. 3 Approximation Error of LOT to original OT as a function of rank Our goal in this section is to obtain a control of the error induced by the low-rank constraint when trying to approximate the true OT cost. We provide first a control of the approximation error in the discrete setting. The proof is given in Appendix B.1. Proposition 1. Let n,m 2, X , {x1, . . . , xn} ⇢ X , Y , {y1, . . . , ym} ⇢ Y and a 2 ⇤n and b 2 ⇤m. Then for 2 r min(n,m), we have that |LOTr,c(µa,X, ⌫b,Y) OTc(µa,X, ⌫b,Y)| kCk1 ln(min(n,m)/(r 1)) Remark 1. Note that this result improves the control obtained in [Liu et al., 2021], where they obtain that |LOTr,c(µa,X, ⌫b,Y) OTc(µa,X, ⌫b,Y)| . kCk1 p nm(min(n,m) r) as we have for any z, z0 1, | ln(z) ln(z0)| |z z0|. It is in fact possible to obtain another control of the approximation error by partitioning the space where the measures are supported. For that purpose let us introduce the notion of entropy numbers. Definition 2. Let (Z, d) a metric space, W ⇢ Z and k 1 an integer. Then by denoting BZ(z, ") , {y 2 Z : d(z, y) "}, we define the k-th (dyadic) entropy number of W as Nk(W , d) , inf{" s.t. 9 z1, . . . , z2k 2 Z : W ⇢ [2 k i=1BZ(zi, ")} . For example, any compact set W of Rd admits finite entropy numbers, and by denoting R , supw2W kwk2, we have Nk(W, k · k2) 4R/2k/d. We obtain next a control of the approximation error of LOTr,c to the true OT cost using entropy numbers (see proof in Appendix B.2). Proposition 2. Let µ 2 M+1 (X ), ⌫ 2 M + 1 (Y) and assume that c is L-Lipschitz w.r.t. x and y. Then for any r 1, we have |LOTr,c(µ, ⌫) OTc(µ, ⌫)| 2Lmax(Nblog2(b p rc)c(X , dX ),Nblog2(b p rc)c(Y, dY)) This results in the following bound for the p-Wasserstein distance for any p 1 on Rd. Corollary 1. Let d 1, p 1, X a compact subspace of Rd and µ, ⌫ 2 M+1 (X ). 
By denoting R , supx2X kxk2, we obtain that for any r 1, |LOTr,k·kp2 (µ, ⌫) OTk·k p 2 (µ, ⌫)| 4dp (8R2)p rp/2d . As per the Proof of Proposition 2 we can provide a tighter control, assuming a Monge map exists. Corollary 2. Under the same assumptions of Proposition 2 and by assuming in addition that there exists a Monge map solving OTc(µ, ⌫), we obtain that for any r 1, |LOTr,c(µ, ⌫) OTc(µ, ⌫)| LNblog2(r)c(Y, dY) . When X = Y are a subspaces of Rd, a sufficient condition for a Monge map to exists is that either µ or ⌫ is absolutely continuous with respect to the Lebesgue measure and that c is of the form h(x y) where h : X ! R+ is a strictly convex function [Santambrogio, 2015, Theorem 1.17]. Therefore if µ is absolutely continuous with respect to the Lebesgue measure, we obtain for any r 1 and p > 1 |LOTr,k·kp2 (µ, ⌫) OTk·kp2 (µ, ⌫)| 2dp (8R2)p rp/d . 4 Sample Complexity of LOT We now focus on the statistical performance of the plug-in estimator for LOT. In the following we assume that X = Y for simplicity. Given µ, ⌫ 2 M+1 (X ), we denote the empirical measures associated µ̂n , 1n Pn i=1 Xi and ⌫̂n , 1n Pn i=1 Yi , where (Xi, Yi) n i=1 are sampled independently from µ⌦ ⌫. We consider the plug-in estimator defined as LOTr,c(µ̂n, ⌫̂n), and we aim at quantifying the rate at which it converges towards the true low-rank optimal transport cost LOTr,c(µ, ⌫). Before doing so, in the next Proposition we show that this estimator is consistent on compact spaces. The proof is given in Appendix B.3. Proposition 3. Let r 1 and µ, ⌫ 2 M+1 (X ), then LOTr,c(µ̂n, ⌫̂n) !n!+1 LOTr,c(µ, ⌫) a.s. Next we aim at obtaining the convergence rates of our plug-in estimator. In the following Proposition, we obtain a non-asymptotic upper-bound of the statistical error. See Appendix B.4 for the proof. Proposition 4. Let r 1 and µ, ⌫ 2 M+1 (X ). Then, there exists a constant Kr such that for any > 0 and n 1, we have, with a probability of at least 1 2 , that LOTr,c(µ̂n, ⌫̂n) LOTr,c(µ, ⌫) + 11kck1 r r n +Krkck1 "r log(40/ ) n + p r log(40/ ) n # . This result is, to the best of our knowledge, the first attempt at providing a statistical control of low-rank optimal transport. We provide an upper-bound of the plug-in estimator which converges towards LOTr,c at a parametric rate and which is independent of the dimension on general compact metric spaces. While we fall short of providing a lower bound that could match that upper bound, and therefore provide a complete statistical complexity result, we believe this result might provide a first explanation on why, in practice, LOTr,c displays better statistical properties than unregularized OT and its curse of dimensionality [Dudley, 1969]. In addition, that upper bound compares favorably to known results on entropic optimal transport. The rate of entropy regularized OT does not depend on the ambient dimension with respect to n, but carries an exponential dependence in dimension with respect to the regularization parameter " [Mena and Niles-Weed, 2019]. By contrast, the term associated with the nonnegative rank r in our bound has no direct dependence on dimension. Our next aim is to obtain an explicit rate with respect to r and n. In Proposition 4, we cannot control explicitly Kr in the general setting. Indeed, in our proof, we obtain that Kr , 14/mini ⇤i where ( ⇤i ) r i=1 2 ⇤ r are the weights involved in the decomposition of one optimal solution of the true LOTr,c(µ, ⌫). Therefore the control of Kr requires additional assumptions on the optimal solutions of LOTr,c(µ, ⌫). 
In the following Proposition, we obtain an explicit upper-bound of the plug-in estimator with respect to r and n in the asymptotic regime. Proposition 5. Let r 1, > 0 and µ, ⌫ 2 M+1 (X ). Then there exists a constant Nr, such that if n Nr, then with a probability of at least 1 2 , we have LOTr,c(µ̂n, ⌫̂n) LOTr,c(µ, ⌫) + 11kck1 r r n + 77kck1 r log(40/ ) n . Note that one cannot recover the result obtained in Proposition 5 from the one obtained in Proposition 4 as we have that Kr 14r ! r!+1 +1. In order to prove the above result, we use an extension of the McDiarmid’s inequality when differences are bounded with high probability [Kutin, 2002]. See proof in Appendix B.5 for more details. 5 Debiased Formulation of LOT We introduce here the debiased formulation of LOTr,c and show that it is able to distinguish two distributions, metrize the convergence in law and can be used as a new objective in order to learn distributions. We focus next on the debiasing terms involving measures with themselves LOTr,c(µ, µ) in this new divergence, and show that they can be interpreted as defining a new clustering method generalizing k-means for any geometry. 5.1 On the Proprieties of the Debiased Low-rank Optimal Transport When it comes to learn (or generate) a distribution in ML applications given samples, it is crucial to consider a divergence that is able to distinguish between two distributions and metrize the convergence in law. In general, LOTr,c(µ, µ) 6= 0 and the minimum of LOTr,c(⌫, µ) with respect to ⌫ will not necessarily recover µ. In order to alleviate this issue we propose a debiased version of LOTr,c defined for any µ, ⌫ 2 M+1 (X ) as DLOTr,c(µ, ⌫) , LOTr,c(µ, ⌫) 1 2 [LOTr,c(µ, µ) + LOTr,c(⌫, ⌫)] . Note that DLOTr,c(⌫, ⌫) = 0. In the next Proposition, we show that, as the Sinkhorn divergence [Genevay et al., 2018b, Feydy et al., 2018], DLOTr,c interpolates between the Maximum Mean Discrepancy (MMD) and OT. See proof in Appendix B.6. Proposition 6. Let µ, ⌫ 2 M+1 (X ). Let us assume that c is symmetric, then we have DLOT1,c(µ, ⌫) = 1 2 Z X 2 c(x, y)d[µ ⌫]⌦ d[µ ⌫](x, y) . If in addition we assume the c is Lipschitz w.r.t to x and y, then we have DLOTr,c(µ, ⌫) ! r!+1 OTc(µ, ⌫) . Next, we aim at showing some useful properties of the debiased low-rank OT for machine learning applications. For that purpose, let us first recall some definitions. Definition 3. We say that the cost c : X ⇥ X ! R+ is a semimetric on X if for all x, x0 2 X , c(x, x0) = c(x0, x) and c(x, x0) = 0 if and only if x = x0. In addition we say that c has a negative type if 8n 2, x1, . . . , xn 2 X and ↵1, . . . ,↵n 2 R such that Pn i=1 ↵i = 0, Pn i,j=1 ↵i↵jc(xi, xj) 0. We say also that c has a strong negative type if for all µ, ⌫ 2 M+1 (X ), µ 6= ⌫ =) R X 2 c(x, y)d[µ ⌫]⌦ [µ ⌫] < 0. Note that if c has a strong negative type, then c has a negative type too. For example, all Euclidean spaces and even separable Hilbert spaces endowed with the metric induced by their inner products have strong negative type. Also, on Rd, the squared Euclidean distance has a negative type [Sejdinovic et al., 2013]. We can now provide stronger geometric guarantees for DLOTr,c. In the next Proposition, we show that for a large class of cost functions, DLOTr,c is nonnegative, able to distinguish two distributions, and metrizes the convergence in law. The proof is given in Appendix B.8. Proposition 7. Let r 1, and let us assume that c is a semimetric of negative type. Then for all µ, ⌫ 2 M+1 (X ), we have that DLOTr(µ, ⌫) 0 . 
In addition, if c has strong negative type then we have also that DLOTr,c(µ, ⌫) = 0 () µ = ⌫ and µn ! µ () DLOTr,c(µn, µ) ! 0 . where the convergence of the sequence of probability measures considered is the convergence in law. Observe that when c has strong negative type, ⌫ ! DLOTr,c(⌫, µ) 0 and it admits a unique global minimizer at ⌫ = µ. Therefore, DLOTr,c has desirable properties to be used as a loss. It is also worth noting that, in order to obtain the metrization of the convergence in law, we show the following Proposition. See proof in Appendix B.7. Proposition 8. Let r 1 and (µn)n 0 and (⌫n)n 0 two sequences of probability measures such that µn ! µ and ⌫n ! ⌫ with respect to the convergence in law. Then we have that LOTr,c(µn, ⌫n) ! LOTr,c(µ, ⌫) . 5.2 Low-Rank Transport Bias and Clustering We turn next to the debiasing terms appearing in DLOT and exhibit links between LOT and clustering methods. Indeed, in the discrete setting, the low-rank bias of a probability measure µ defined as LOTk,c(µ, µ) can be seen as a generalized version of the k-means method for any geometry. In the next Proposition we obtain a new formulation of LOTk,c(µ, µ) viewed as a general clustering method on arbitrary metric space. See proof in Appendix B.9. Proposition 9. Let n k 1, X , {x1, . . . , xn} ⇢ X and a 2 ⇤n. If c is a semimetric of negative type, then by denoting C = (c(xi, xj))i,j , we have that LOTk,c(µa,X, µa,X) = min Q hC,Qdiag(1/QT1n)Q T i s.t. Q 2 Rn⇥k+ , Q1k = a . (6) Let us now explain in more details the link between (6) and k-means. When X is a subspace of Rd, c is the squared Euclidean distance and a = 1n, we recover exactly the k-means algorithm. Corollary 3. Let n k 1 and X , {x1, . . . , xn} ⇢ Rd. We have that LOTk,k·k22(µ1n,X, µa,X) = 2 minQ,z1,...,zk nX i=1 kX q=1 Qi,qkxi zqk 2 2 s.t. Q 2 {0, 1} n⇥k, Q1k = 1n . In the general setting, solving LOTk,c(µa,X, µa,X) for a given geometry c, and a prescribed histrogram a offers a new clustering method where the assignment of the points to the clusters is determined by the matrix Q⇤ solution of (6). 6 Computing LOT: Adaptive Stepsizes and Better Initializations We target in this section practical issues that arises when using [Scetbon et al., 2021, Algo.3] to solve (4). Scetbon et al. [2021] propose to apply a mirror descent scheme with respect to the KullbackLeibler divergence which boils down to solve at each iteration k 0 the following convex problem using the Dykstra’s Algorithm [Dykstra, 1983]: (Qk+1, Rk+1, gk+1) , argmin ⇣2C1(a,b,r)\C2(r) KL(⇣, ⇠k) . (7) where (Q0, R0, g0) 2 C1(a, b, r) \ C2(r), ⇠k , (⇠(1)k , ⇠ (2) k , ⇠ (3) k ), ⇠ (1) k , Qk exp( kCRk diag(1/gk)), ⇠ (2) k , Rk exp( kCTQk diag(1/gk)), ⇠ (3) k , gk exp( k!k/g2k) with [!k]i , [QTkCRk]i,i for all i 2 {1, . . . , r}, KL(w, r) , P i wi log(wi/ri) and ( k)k 0 is a sequence of positive step sizes. In the general setting, each iteration of their algorithm requires O(nmr) operations and when the ground cost matrix C admits a low-rank factorization of the form C = ABT where A 2 Rn⇥q and B 2 Rm⇥q with q ⌧ min(n,m), then the total complexity per iteration becomes linear O((n+m)rq). Note that for the squared Euclidean cost on Rd, we have that q = d+ 2. In the following we investigate two practical aspects of the algorithm: the choice of the step sizes and the initialization. Adaptive choice of k. Scetbon et al. [2021] show experimentally that the choice of ( k)k 0 does not impact the solution obtained upon convergence, but rather the speed at which it is attained. 
6 Computing LOT: Adaptive Stepsizes and Better Initializations

We target in this section practical issues that arise when using [Scetbon et al., 2021, Algo. 3] to solve (4). Scetbon et al. [2021] propose to apply a mirror descent scheme with respect to the Kullback-Leibler divergence, which boils down to solving at each iteration k ≥ 0 the following convex problem using Dykstra's Algorithm [Dykstra, 1983]:

(Q_{k+1}, R_{k+1}, g_{k+1}) := argmin_{ζ ∈ C₁(a,b,r) ∩ C₂(r)} KL(ζ, ξₖ),  (7)

where (Q₀, R₀, g₀) ∈ C₁(a,b,r) ∩ C₂(r) and ξₖ := (ξₖ⁽¹⁾, ξₖ⁽²⁾, ξₖ⁽³⁾), with ξₖ⁽¹⁾ := Qₖ ⊙ exp(−γₖ C Rₖ diag(1/gₖ)), ξₖ⁽²⁾ := Rₖ ⊙ exp(−γₖ Cᵀ Qₖ diag(1/gₖ)) and ξₖ⁽³⁾ := gₖ ⊙ exp(γₖ ωₖ/gₖ²), where [ωₖ]ᵢ := [Qₖᵀ C Rₖ]ᵢ,ᵢ for all i ∈ {1, …, r}, KL(w, r) := ∑ᵢ wᵢ log(wᵢ/rᵢ), and (γₖ)ₖ≥₀ is a sequence of positive step sizes. In the general setting, each iteration of their algorithm requires O(nmr) operations, and when the ground cost matrix C admits a low-rank factorization of the form C = ABᵀ, where A ∈ R^{n×q} and B ∈ R^{m×q} with q ≪ min(n, m), the total complexity per iteration becomes linear, O((n+m)rq). Note that for the squared Euclidean cost on R^d, we have q = d + 2. In the following we investigate two practical aspects of the algorithm: the choice of the step sizes and the initialization.

Adaptive choice of γₖ. Scetbon et al. [2021] show experimentally that the choice of (γₖ)ₖ≥₀ does not impact the solution obtained upon convergence, but rather the speed at which it is attained. Indeed, the larger γₖ is, the faster the algorithm converges. As a result, their algorithm simply relies on a fixed schedule. However, the range of admissible γ depends on the problem considered and may vary from one problem to another. Indeed, the algorithm might fail to converge, as one needs to ensure at each iteration k of the mirror descent scheme that the kernels ξₖ do not admit 0 entries in order to solve (7) using Dykstra's Algorithm. Such a situation can occur when the terms involved in the exponentials become too large, which may depend on the problem considered. Therefore, it may be of particular interest for practitioners to have a generic range of admissible values for γ, independent of the considered problem, in order to alleviate parameter-tuning issues. We propose to consider instead an adaptive choice of (γₖ)ₖ≥₀ along iterations. D'Orazio et al. [2021] and Bayandina et al. [2018] have proposed adaptive mirror descent schemes where, at each iteration, the step size is normalized by the squared dual norm of the gradient. Applying such a strategy in our case amounts to considering at each iteration

γₖ = γ / ‖(C Rₖ diag(1/gₖ), Cᵀ Qₖ diag(1/gₖ), D(Qₖᵀ C Rₖ)/gₖ²)‖²_∞,  (8)

where the initial γ > 0 is fixed. By doing so, we are able to guarantee a lower bound on the exponential terms involved in the expression of the kernels ξₖ at each iteration and prevent them from having 0 entries. We recommend setting such a global γ ∈ [1, 10], and observe that this range works whatever the problem considered.
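A direct NumPy transcription of the normalization in (8) could look as follows. This is a sketch under our reading of the gradient triple of the mirror descent scheme; the function name and argument conventions are illustrative.

```python
import numpy as np

def adaptive_gamma(gamma, C, Q, R, g):
    """Adaptive step size of Eq. (8): divide the base gamma by the squared
    sup-norm of the mirror-descent gradient triple at (Q, R, g)."""
    grad_Q = (C @ R) / g                          # C R diag(1/g)
    grad_R = (C.T @ Q) / g                        # C^T Q diag(1/g)
    omega = np.einsum('ir,ij,jr->r', Q, C, R)     # diag(Q^T C R)
    grad_g = omega / g ** 2
    sup = max(np.abs(grad_Q).max(), np.abs(grad_R).max(), np.abs(grad_g).max())
    return gamma / sup ** 2
```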
On the choice of the initialization. As LOT_{r,c} (4) is a non-convex optimization problem, the question of choosing an efficient initialization arises in practice. Scetbon et al. [2021] show experimentally that the convergence of the algorithm does not depend on the chosen initialization if no stopping criterion is used. Indeed, their experimental findings support that only well-behaved local minima are attractive. However, in practice one needs a stopping criterion in order to terminate the algorithm. We observe in many instances that trivial initializers may result in spurious local minima, which trigger the stopping criterion early on and prevent the algorithm from reaching a good solution. Based on various experiments, we propose a novel initialization of the algorithm, which aims at starting close to a well-behaved local minimum by clustering the input measures. When the measures are supported on a Euclidean space, we propose to find r centroids (zᵢ)ᵢ₌₁ʳ of one of the two input discrete probability measures using k-means and to solve the following convex barycenter problem:

min_{Q,R} ⟨C_{X,Z}, Q⟩ + ⟨C_{Y,Z}, R⟩ − εH(Q) − εH(R) s.t. Q1ᵣ = a, R1ᵣ = b, Qᵀ1ₙ = Rᵀ1ₘ,  (9)

where C_{X,Z} = (c(xᵢ, zⱼ))ᵢ,ⱼ, C_{Y,Z} = (c(yᵢ, zⱼ))ᵢ,ⱼ, and H(P) = −∑ᵢ,ⱼ Pᵢ,ⱼ(log(Pᵢ,ⱼ) − 1). In practice we fix ε = 1/10, and we then initialize LOT_{r,c} using the (Q, R) solution of (9) and g := Qᵀ1ₙ (= Rᵀ1ₘ). Note that (Q, R, g) is an admissible initialization, and finding the centroids as well as solving (9) requires O((n + m)r) algebraic operations. Therefore this initialization does not change the total complexity of the algorithm. In the general (non-Euclidean) case, we propose to initialize the algorithm by applying our generalized k-means approach defined in (6) to each input measure, where we fix the common marginal to be g = 1ᵣ/r. More precisely, denoting C_{X,X} = (c(xᵢ, xⱼ))ᵢ,ⱼ and C_{Y,Y} = (c(yᵢ, yⱼ))ᵢ,ⱼ, we initialize the algorithm by solving

Q ∈ argmin_Q ⟨C_{X,X}, Q diag(1/Qᵀ1ₙ) Qᵀ⟩ s.t. Q ∈ R₊^{n×r}, Q1ᵣ = a, Qᵀ1ₙ = 1ᵣ/r,
R ∈ argmin_R ⟨C_{Y,Y}, R diag(1/Rᵀ1ₘ) Rᵀ⟩ s.t. R ∈ R₊^{m×r}, R1ᵣ = b, Rᵀ1ₘ = 1ᵣ/r.  (10)

Note that, again, the (Q, R, g) obtained is an admissible initialization, and the complexity of solving (10) is of the same order as that of solving (4), so the total complexity of the algorithm remains the same.

7 Experiments

In this section, we illustrate our theoretical findings experimentally and show how our initialization provides practical improvements. For that purpose we consider three synthetic problems and one real-world dataset to: (i) illustrate the statistical rates of LOT_{r,c}, (ii) exhibit the gradient flow of the debiased formulation DLOT_{r,c}, (iii) use the clustering method induced by LOT_{r,c}, and (iv) show the effect of the initialization. All experiments were run on a MacBook Pro 2019 laptop.

Statistical rates. We aim at showing the statistical rates of the plug-in estimator of LOT_{r,c}. As LOT_{r,c}(µ, µ) ≠ 0 and as we do not have access to this value given samples from µ, we consider instead the debiased version of the low-rank optimal transport, DLOT_{r,c}. In Figure 1, we show that the empirical rates match the theoretical bound obtained in Proposition 4. In particular, we show that these rates do not depend on the dimension of the ground space. Note also that we recover our theoretical dependence with respect to the rank r: the higher the rank, the slower the convergence.

Gradient Flows using DLOT. We illustrate here a practical use of DLOT for ML applications. In Figure 2, we consider Y₁, …, Yₙ independent samples from a moon-shaped distribution in 2D and, denoting ν̂ₙ the associated empirical measure, we show the iterations obtained by a gradient descent scheme on the following optimization problem:

min_{X ∈ R^{n×2}} DLOT_{r,c}(µ_{1ₙ/n, X}, ν̂ₙ).

We initialize the algorithm using n = 1000 samples drawn from a Gaussian distribution. We show that the gradient flow of our debiased version is able to recover the target distribution. We also compare it with the gradient flow of the biased version (LOT) and show that the latter fails to reproduce the target distribution, as it learns a biased one with a low-rank structure.

Application to Clustering. In this experiment we show some applications of the clustering method induced by LOT_{r,c}. In Figure 3, we consider 6 datasets with different structures and aim at recovering the clusters using (6) for well-chosen costs. We compare the clusters obtained when considering either the squared Euclidean cost (which amounts to applying k-means) or the shortest-path distance on the data viewed as a graph. We show that our method is able to recover the clusters in these settings for well-chosen costs, and therefore the algorithm proposed in Scetbon et al. [2021] can be seen as a new alternative for clustering data.
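For the graph-based variant of this experiment, the cost matrix can be built, for instance, as shortest-path distances on a k-nearest-neighbor graph of the data. The paper does not specify the graph construction, so the kNN choice below is our assumption.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def shortest_path_cost(X, n_neighbors=10):
    """Shortest-path cost matrix on a symmetrized kNN graph of the samples,
    usable as C in the clustering objective (6). Assumes the resulting
    graph is connected (otherwise some entries are inf)."""
    G = kneighbors_graph(X, n_neighbors, mode='distance')
    return shortest_path(G, method='D', directed=False)   # Dijkstra
```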
Effect of the Initialization. Our goal here is to show the effect of the initialization. In Figure 4, we display the evolution of the cost as well as the value of the stopping criterion along the iterations of the MD scheme solving (4) for different initializations. The x-axis corresponds to the total number of algebraic operations. This number is computed at each iteration of the outer loop of the algorithm proposed in Scetbon et al. [2021] and is obtained by accumulating the complexity of all the operations involved in their algorithm up to that point. We consider this notion of time instead of CPU/GPU time because we do not want to be architecture/machine dependent. Recall also that the stopping criterion introduced in [Scetbon et al., 2021] is defined for all k ≥ 1 by

Δₖ := (1/γₖ²) [KL((Qₖ, Rₖ, gₖ), (Qₖ₋₁, Rₖ₋₁, gₖ₋₁)) + KL((Qₖ₋₁, Rₖ₋₁, gₖ₋₁), (Qₖ, Rₖ, gₖ))],

where ((Qₖ, Rₖ, gₖ))ₖ≥₀ is the sequence of solutions of (7). First, we show that, whatever the initialization chosen, the algorithm manages to converge to an efficient solution if no stopping criterion is used. However, the choice of the initialization may impact the termination of the algorithm, as some initializations might be too close to spurious local minima. The right plot of Figure 4 supports two main observations: (i) the initial points obtained using a "rank 2" or random initialization can be close to spurious and non-attractive local minima, which may trigger the stopping criterion too early and prevent the algorithm from continuing to run and converging towards an attractive, well-behaved local minimum; (ii) when initializing the algorithm using our k-means-based methods (9) and (10), the stopping criterion is a decreasing function of time, meaning that the initial point is sufficiently far away from bad local minima and the algorithm converges directly towards the desired solution.

Conclusion. We assembled in this work theoretical and practical arguments to support low-rank factorizations for OT. We have presented two controls: one concerning the approximation error to the true optimal transport, and another concerning the statistical rates of the plug-in estimator. The latter is shown to be independent of the dimension, which is of particular interest when studying OT in ML settings. We have further motivated the use of LOT as a loss by introducing its debiased version and showing that it possesses desirable properties: positivity and metrization of the convergence in law. We have also presented the links between the bias induced by such regularization and clustering methods, and studied empirically the effects of the hyperparameters involved in the practical estimation of LOT. The strong theoretical foundations provided in this paper motivate further studies of the empirical behaviour of the LOT estimator, notably on finding suitable local minima and on improving the convergence of the MD scheme using other adaptive choices of step sizes.

Acknowledgements. This work was supported by a "Chaire d'excellence de l'IDEX Paris Saclay". The authors would also like to thank Gabriel Peyré and Jaouad Mourtada for enlightening conversations on the topics discussed in this work.
1. What is the focus of the paper regarding low-rank OT? 2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis and practical applications? 3. Do you have any concerns or questions about the paper's content, such as figures or typos? 4. How does the reviewer assess the limitations of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper provides an enormous amount of theoretical analysis of low-rank OT: the convergence rate of low-rank OT to the true OT w.r.t. the rank parameter; the sample complexity for estimating LOT; a debiased version of LOT which metrizes weak convergence; and a bridge between LOT and clustering methods. Practically, they propose a novel initialization to avoid bad local minima.
Strengths And Weaknesses
Strengths: This paper provides a rigorous theoretical analysis of low-rank OT: the convergence rate of low-rank OT to the true OT w.r.t. the rank parameter; the sample complexity for estimating LOT is dimension-independent; a debiased version of LOT which metrizes weak convergence; LOT(µ, µ) can be seen as a generalization of the k-means method. Practically, they also propose a novel initialization to avoid bad local minima.
Weakness: The example in Figure 2 is too simple. A Swiss roll or two moons as in Figure 3.2 would be more convincing.
Questions
Question: In Figure 2, at which stage of the gradient flow are the middle two plots? They seem to be at a very late stage of convergence, but a weird phenomenon is that the gradient flows of both DLOT and LOT first exceed the target distribution and then come back. Especially when looking at the green arrows, they first point outside the moon, then inside the moon. I think that if you solve the gradient flow correctly, it will not show this "exceed first and then pull back" behavior.
Typos: row 142: delete "in"; equation (6) is referenced as (16) throughout.
Limitations
.
NIPS
Title Parabolic Approximation Line Search for DNNs

Abstract A major challenge in current optimization research for deep learning is to automatically find optimal step sizes for each update step. The optimal step size is closely related to the shape of the loss in the update step direction. However, this shape has not yet been examined in detail. This work shows empirically that the batch loss over lines in negative gradient direction is mostly convex locally and well suited for one-dimensional parabolic approximations. By exploiting this parabolic property we introduce a simple and robust line search approach, which performs loss-shape dependent update steps. Our approach combines well-known methods such as parabolic approximation, line search and conjugate gradient to perform efficiently. It surpasses other step size estimating methods and competes with common optimization methods on a large variety of experiments without the need for hand-designed step size schedules. Thus, it is of interest for objectives where step-size schedules are unknown or do not perform well. Our extensive evaluation includes multiple comprehensive hyperparameter grid searches on several datasets and architectures. Finally, we provide a general investigation of exact line searches in the context of batch losses and exact losses, including their relation to our line search approach.

1 Introduction

Automatic determination of optimal step sizes for each update step of stochastic gradient descent is a major challenge in current optimization research for deep learning [3,5,12,29,38,43,46,50,58]. One default approach to tackle this challenge is to apply line search methods. Several of these have been introduced for deep learning [12, 29, 38, 43, 58]. However, these approaches have not analyzed the shape of the loss functions in update step direction in detail, which is important, since the optimal step size stands in strong relation to this shape. To shed light on this, our work empirically analyses the shape of the loss function in update step direction for deep learning scenarios often considered in optimization. We further elaborate the properties found to define a simple, competitive, empirically justified optimizer. Our contributions are as follows: 1: Empirical analysis suggests that the loss function in negative gradient direction mostly shows locally convex shapes. Furthermore, we show that parabolic approximations are well suited to estimate the minima in these directions (Section 3). 2: Exploiting the parabolic property, we build a simple line search optimizer which constructs its own loss-function-dependent learning rate schedule. The performance of our optimization method is extensively analyzed, including a comprehensive comparison to other optimization methods (Sections 4, 5). 3: We provide a convergence analysis which backs our empirical results, under strong assumptions (Section 4.4). 4: We provide a general investigation of exact line searches on batch losses and their relation to line searches on the exact loss as well as their relation to our line search approach (Section 6) and, finally, analyze the relation of our approach to interpolation (Section 7).

The empirical loss L is defined as the average over realizations of a batch-wise loss function L:

L : R^m → R, θ ↦ n⁻¹ ∑ᵢ₌₁ⁿ L(xᵢ; θ),

with n being the number of batches, where xᵢ denotes a batch of a dataset and θ ∈ R^m denotes the parameters to be optimized. Note that we consider a sample to be one batch of multiple inputs. We denote by L(x_t; θ_t) the batch loss of a batch x_t at optimization step t. In this work, we consider L(x_t; θ_t) in negative gradient direction:

l_t(s) : R → R, s ↦ L(x_t; θ_t + s · (−g_t/‖g_t‖)),  (1)

where g_t is ∇_{θ_t}L(x_t; θ_t). For simplification, we call l_t(s) a line function or vertical cross section, and s a step on this line. The motivation of our work builds upon the following assumption:

Assumption 1. (Informal) The position θ_min = θ_t + s_min · (−g_t/‖g_t‖) of a minimum of l_t is a good enough estimator of the position of the minimum of the empirical loss L on the same line to perform a successful optimization process.

We empirically analyze Assumption 1 further in Section 6.
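To make Eq. (1) concrete, the following sketch evaluates l_t(s) on a grid of steps for any differentiable batch loss; `loss_fn` and `grad_fn` are illustrative stand-ins for a model's batch loss and gradient, not part of the paper.

```python
import numpy as np

def line_losses(loss_fn, grad_fn, theta, batch, steps):
    """Sample the line function l_t(s) = L(x_t; theta_t + s * (-g_t/||g_t||))
    of Eq. (1) at the given step sizes s."""
    g = grad_fn(theta, batch)
    d = -g / np.linalg.norm(g)          # normalized negative gradient
    return np.array([loss_fn(theta + s * d, batch) for s in steps])
```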
2 Related work

Our optimization approach is based on well-known methods such as line search, the nonlinear conjugate gradient method and quadratic approximation, which can be found in Numerical Optimization [28], which, in addition, describes a similar line search routine for the deterministic setting. The concept of parabolic approximations is also exploited by the well-known line search of Moré and Thuente [40]. Our work contrasts with common optimization approaches in deep learning by directly exploiting the parabolic property (see Section 3) of vertical cross sections of the batch loss. Similarly, SGD-HD [3] performs update steps towards the minimum on vertical cross sections of the batch loss by performing gradient descent on the learning rate. Concurrently, [10] explored a similar direction as this work by analyzing possible line search approximations for DNN loss landscapes, but does not exploit these for optimization. The recently published Stochastic Line-Search (SLS) [58] is an optimized backtracking line search based on the Armijo condition, which, like our approach, samples additional batch losses from the same batch and checks the Armijo condition on these. [58] assumes that the model interpolates the data. Formally, this implies that the gradient at a minimum of the empirical loss is 0 for the empirical loss as well as for all batch (sample) losses. [12] also uses a backtracking Armijo line search, but with the aim of regulating the optimal batch size. SLS exhibits competitive performance against multiple optimizers on several DNN tasks. [43] introduces a related idea but does not provide empirical results for DNNs. The methodically appealing but complex Probabilistic Line Search (PLS) [38] and Gradient-Only Line Search (GOLS1) [29] consider a discontinuous stochastic loss function. GOLS1 searches for a minimum on lines by searching for a sign change of the first directional derivative in search direction. PLS optimizes on lines of a stochastic loss function by approximating it with a Gaussian process surrogate and exploiting a probabilistic formulation of the Wolfe conditions. Both approaches show that they can optimize successfully on several machine learning problems and can compete against plain SGD. From the perspective of assumptions about the shape of the loss landscape, second-order methods such as oLBFGS [53], KFRA [7], L-SR1 [45], QUICKPROP [15], S-LSR1 [4], and KFAC [39] generally assume that the loss function can be approximated locally by a parabola of the same dimension as the loss function. Adaptive methods such as SGD with momentum [49], ADAM [30], ADAGRAD [14], ADABOUND [37], AMSGRAD [47] and RMSProp [57] focus more on the handling of noise than on shape assumptions.
In addition, methods exist that approximate the loss function in specific directions: the L4 adaptation scheme [50] as well as ALIG [5] estimate step sizes by approximating the loss function linearly in negative gradient direction, whereas our approach approximates the loss function parabolically in negative gradient direction. Finally, COCOB [42] has to be mentioned, an alternative learning-rate-free approach which automatically estimates step directions and sizes with a reward-based coin betting concept.

3 Empirical analysis of the shape of batch losses on vertical cross sections

In this section we analyze line functions (see Eq. 1) during the training of multiple architectures and show that they locally exhibit mostly convex shapes, which are well suited for parabolic approximations. We focus on CIFAR-10, as it is extensively analyzed in optimization research for deep learning. However, on random samples of MNIST, CIFAR-100 and ImageNet we observed the same results. We analyzed cross sections of 4 commonly used architectures in detail. To do so, we evaluated the cross sections of the first 10000 update steps for each architecture. For each cross section we sampled 50 losses and performed a parabolic approximation (see Section 4). An unbiased selection of our results on a ResNet32 is shown in Figure 1. Further results are given in Appendix A. In accordance with [59], we conclude that the analyzed cross sections tend to be locally convex. In addition, one-dimensional parabolic approximations of the form f(s) = as² + bs + c with a ≠ 0 are well suited to estimate the position of a minimum on such cross sections. To substantiate the latter observation, we analyzed the angle between the line direction and the gradient at the estimated minimum during training. A position is a local extremum or saddle point of the cross section if and only if the angle between the line direction and the gradient at that position is 90°, if measured on the same batch.¹ As shown in Figures 2 and 3, this property holds well for several architectures trained on MNIST, CIFAR-10, CIFAR-100 and ImageNet. The property fits best for MNIST and gets worse for more complex tasks such as ImageNet. We have to note that the measuring step sizes and update step adaptation factors (see Sections 4.1 and 4.3) were chosen to fit the line functions decently. We can ensure that the extrema found are minima, since we additionally plotted the line function for each update step. In addition, we analyzed vertical cross sections in conjugate-like directions and in random directions. Vertical cross sections in conjugate-like directions also tend to have convex shapes (see Appendix D.4, Figure 17). However, vertical cross sections in random directions rarely exhibit convex shapes.

Figure 2: Angles between the line direction and the gradient at the estimated minimum, measured on the same batch. If the angle is 90°, the estimated minimum is a real local minimum. We know from additional line plots that the found extrema or saddle points are minima. Left: measurement over the first 10 epochs. Right: measurement over the first 60 epochs. Update step adaptation (see Section 4.3) is applied.

¹This holds because if the directional derivative of the measured gradient in line direction is 0, the current position is an extremum or saddle point of the cross section and the angle is 90°. If the position is not an extremum or saddle point, the directional derivative is not 0 [28].
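The angle criterion used in this section can be sketched as follows for any batch loss with an accessible gradient; the helper names are ours, not the paper's.

```python
import numpy as np

def angle_at_estimated_minimum(grad_fn, theta, batch, d, s_min):
    """Angle (in degrees) between the line direction d and the gradient at the
    estimated minimum theta + s_min * d, measured on the same batch; a value
    of 90 indicates an exact extremum or saddle point of the cross section."""
    g = grad_fn(theta + s_min * d, batch)
    cos = (d @ g) / (np.linalg.norm(d) * np.linalg.norm(g))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```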
4 The line search algorithm

By exploiting the property that parabolic approximations are well suited to estimate the position of minima on line functions, we introduce Parabolic Approximation Line Search (PAL). This simple approach combines well-known methods from basic optimization, such as parabolic approximation and line search [28], to perform an efficient line search. We note that the general idea of this method can be applied to any optimizer that provides an update step direction.

4.1 Parameter update rule

An intuitive explanation of PAL's parameter update rule based on a parabolic approximation is given in Figure 4. Since l_t(s) (see Eq. 1) is assumed to exhibit a convex and almost parabolic shape, we approximate it with l̂_t(s) = as² + bs + c, where a ≠ 0 and a, b, c ∈ R. Consequently, we need three measurements to define a, b and c. Those are given by the current loss l_t(0), the derivative in gradient direction l′_t(0) = −‖g_t‖ (see Eq. 4), and an additional loss l_t(µ) with measuring distance µ ∈ R₊. We get a = (l_t(µ) − l_t(0) − l′_t(0)µ)/µ², b = l′_t(0), and c = l_t(0). The update step s_upd to the minimum of the parabolic approximation l̂_t(s) is thus given by:

s_upd_t = −l̂′_t(0)/l̂″_t(0) = −b/(2a) = −l′_t(0) / (2 · (l_t(µ) − l_t(0) − l′_t(0)µ)/µ²).  (2)

Figure 4: The line function l(s) = L(x_t; θ_t + s · (−g_t/‖g_t‖)), where g_t is ∇_{θ_t}L(x_t; θ_t). The red curve is its parabolic approximation l̂(s). With l(0), l(µ) and g_t (orange), we have the three parameters needed to determine the update step s_upd to the minimum of the parabolic approximation.

Note that l̂″_t(0) is the second derivative of the approximated parabola and is only identical to the exact directional derivative (−g_tᵀ/‖g_t‖) H(L(x_t; θ_t)) (−g_t/‖g_t‖) if the parabolic approximation fits. The normalization of the gradient to unit length (Eq. 1) was chosen to make the measuring distance µ independent of the gradient size and of weight scaling. Note that two network inferences are required to determine l_t(0) and l_t(µ). Consequently, PAL needs two forward passes and one backward pass through a model. Further on, the batch loss L(x_t; θ_t) may include random components but, to ensure continuity during one line search, drawn random numbers have to be reused for each value determination of L at step t (e.g., for Dropout [55]). The memory required by PAL is similar to that of SGD with momentum, since only the last update direction has to be saved. A basic, well performing version of PAL is given in Algorithm 1.

Algorithm 1 The basic version of our proposed line search algorithm. See Section 4 for details.
Input: µ: measuring step size
Input: L(x; θ): loss function
Input: x: list of input vectors
Input: θ₀: initial parameter vector
1: t ← 0
2: while θ_t not converged do
3:   l₀ ← L(x_t; θ_t)  # l₀ = l_t(0), see Eq. 1
4:   g_t ← −∇_{θ_t}L(x_t; θ_t)
5:   l_µ ← L(x_t; θ_t + µ · g_t/‖g_t‖)
6:   b ← −‖g_t‖
7:   a ← (l_µ − l₀ − bµ)/µ²
8:   if proper curvature then
9:     s_upd ← −b/(2a)
10:  else
11:    # set s_upd according to Section 4.2
12:  end if
13:  θ_{t+1} ← θ_t + s_upd · g_t/‖g_t‖
14:  t ← t + 1
15: end while
16: return θ_t

4.2 Case discrimination of parabolic approximations

Since not all parabolic approximations are suitable for parameter update steps, the following cases are considered separately. Note that b = l′_t(0) and a = 0.5 l̂″_t(0). 1: a > 0 and b < 0: the parabolic approximation has a minimum in line direction, thus the parameter update is done as described in Section 4.1. 2: a ≤ 0 and b < 0: the parabolic approximation has a maximum in negative line direction, or is a line with negative slope. In those cases a parabolic approximation is inappropriate, and s_upd is set to µ, since the second measured point has a lower loss than the first. 3: Since b = −‖g_t‖ cannot be greater than 0, the only case left is an extremum at the current position (l′_t(0) = 0). In this case, no weight update is performed. However, the loss function is changed by the next batch. In accordance with Section 3, cases 2 and 3 appeared very rarely in our experiments.
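A compact NumPy sketch of one update of Algorithm 1, including the case discrimination of Section 4.2, could look as follows; `loss_fn` and `grad_fn` are again illustrative stand-ins for a model's batch loss and gradient.

```python
import numpy as np

def pal_step(loss_fn, grad_fn, theta, batch, mu=0.1):
    """One PAL update: fit l_hat(s) = a s^2 + b s + c along the normalized
    negative gradient and step to the minimum of the parabola (Eq. 2)."""
    grad = grad_fn(theta, batch)
    d = -grad / np.linalg.norm(grad)    # normalized negative gradient
    l0 = loss_fn(theta, batch)          # l_t(0)
    l_mu = loss_fn(theta + mu * d, batch)
    b = -np.linalg.norm(grad)           # l'_t(0)
    a = (l_mu - l0 - b * mu) / mu ** 2
    if a > 0 and b < 0:                 # case 1: parabola with a minimum
        s_upd = -b / (2 * a)
    elif a <= 0 and b < 0:              # case 2: fall back to the lower point
        s_upd = mu
    else:                               # case 3: l'_t(0) = 0, no update
        s_upd = 0.0
    return theta + s_upd * d
```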
4.3 Additions

We introduce multiple additions to Algorithm 1 to fine-tune the performance and handle degenerate cases. We emphasize that our hyperparameter sensitivity analysis (Appendix D.6) suggests that the influence of the introduced hyperparameters on the optimizer's performance is low. Thus, they only need to be adapted to fine-tune the results. The full version of PAL, including all additions, is given in Appendix B, Algorithm 2.

Direction adaptation: Instead of following the direction of the negative gradient, we follow an adapted conjugate-like direction d_t:

d_t = −∇_{θ_t}L(x_t; θ_t) + β d_{t−1},  d₀ = −∇_{θ₀}L(x₀; θ₀),  (3)

with β ∈ [0, 1]. Since an adapted direction is now used, l′_t(0) changes to:

l′_t(0) = ∇_{θ_t}L(x_t; θ_t)ᵀ · d_t/‖d_t‖.  (4)

This approach aims to find a more optimal search direction than the negative gradient. We implemented and tested the formulas of Fletcher-Reeves [16], Polak-Ribière [48], Hestenes-Stiefel [24] and Dai-Yuan [11] to determine conjugate directions under the assumption that the loss function is a quadratic. However, choosing a constant β of value 0.2 or 0.4 performs equally well. The influence of β and of dynamic update steps on PAL's performance is discussed in Appendix D.5. In the analyzed scenario β can both increase and decrease the performance, whereas dynamic update steps mostly increase the performance. The combination of both is needed to achieve optimal results.

Update step adaptation: Our preliminary experiments revealed a systematic error caused by constantly approximating with slightly too narrow parabolas. Therefore, s_upd is multiplied by a parameter α ≥ 1 (compare to Eq. 2). This is useful to estimate the position of the minimum on a line more exactly, but has minor effects on training performance.

Maximum step size: To keep the algorithm from failing due to inaccurate parabolic approximations, we use a maximum step size s_max. The new update step is given by min(s_upd, s_max). However, most of our experiments with s_max = 10^0.5 ≈ 3.16 never reached this step size and still performed well.

4.4 Theoretical considerations

Usually, convergence in deep learning is shown for convex stochastic functions with an L-Lipschitz continuous gradient. However, since our approach originates from empirical results, it is not given that a profound theoretical analysis is possible. In order to show any convergence guarantees for parabolic approximations, we have to fall back on uncommonly strong assumptions which lead to quadratic models. Since convergence proofs on quadratics are of minor importance for most readers, our derivations can be found in Appendix C.
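The additions of Section 4.3 wrap around the basic update. The following sketch (our naming, not the paper's) shows how the conjugate-like direction of Eq. (3), the matching directional derivative of Eq. (4), and the step scaling and capping combine; the parabolic fit itself proceeds as in Algorithm 1.

```python
import numpy as np

def pal_additions(grad, d_prev, s_parab, beta=0.4, alpha=1.0, s_max=3.16):
    """Additions of Section 4.3: conjugate-like direction (Eq. 3), the matching
    directional derivative l'_t(0) (Eq. 4), and the scaled, capped update step,
    where s_parab = -b/(2a) is the step from the parabolic fit of Eq. (2)."""
    d = -grad + beta * d_prev                     # Eq. (3)
    slope = grad @ d / np.linalg.norm(d)          # Eq. (4)
    s_upd = min(alpha * s_parab, s_max)           # step adaptation + cap
    return d, slope, s_upd
```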
5 Evaluation

5.1 Experimental design

We performed a comprehensive evaluation to analyze the performance of PAL on a variety of deep learning optimization tasks. We tested PAL on commonly used architectures on CIFAR-10 [31], CIFAR-100 [31] and ImageNet [13]. For CIFAR-10 and CIFAR-100, we evaluated on DenseNet40 [25], EfficientNetB0 [56], ResNet32 [23] and MobileNetV2 [52]. On ImageNet we evaluated on DenseNet121 and ResNet50. In addition, we considered an RNN trained on the Tolstoi war-and-peace text prediction task. We compare PAL to SLS [58], whose Armijo variant is state-of-the-art in the line search field for DNNs. In addition, we compare against the following well-studied and widely used first-order optimizers: SGD with momentum [49], ADAM [30] and RMSProp [57], as well as against SGDHD [3] and ALIG [5], which automatically estimate learning rates in negative gradient direction, and, finally, against the coin betting approach COCOB [42]. To perform a fair comparison, we compared a variety of commonly used hyperparameter combinations for each optimizer. In addition, we utilize those combinations to analyze the hyperparameter sensitivity of each optimizer. Since a grid search on ImageNet was too expensive, the best hyperparameter configuration from the CIFAR-100 evaluation was used to test hyperparameter transferability. A detailed explanation of the experiments, including the hyperparameters and data augmentations used, is given in Appendix D.8. All in all, we trained over 4500 networks with Tensorflow 1.15 [1] on Nvidia GeForce GTX 1080 Ti graphics cards. Since PAL is a line search approach, the predefined learning rate schedules of SGD and the generated schedules of SLS, ALIG, SGDHD and PAL were compared. Due to normalization, PAL's learning rate is given by s_upd_t/‖d_t‖.

5.2 Results

A selection of our results is given in Figure 5. The results for other architectures trained on CIFAR-10, CIFAR-100, ImageNet and Tolstoi are found in Appendix D, Figures 13, 14, 15. A table with exact numerical results of all experiments is provided in Appendix D.9. In most cases PAL decreases the training loss faster and to a lower value than the other optimizers (row 1 of Figures 5, 13, 14, 15). Considering validation and test accuracy, PAL surpasses ALIG, SGDHD and COCOB, competes with RMSProp and ADAM, but is surpassed by SGD (rows 2, 3 of Figures 5, 13, 14, 15). However, RMSProp, ADAM and SGD were tuned with a step size schedule. If we compare PAL to their basic implementations without a schedule, which roughly corresponds to the first plateau reached in row 2 of Figures 5, 13, 14, 15, PAL surpasses the other optimizers; this shows that it can find a well-performing step size schedule. This is especially interesting for problems for which default schedules might not work. SLS decreases the training loss further than the other optimizers on a few problems, but shows weak performance and poor generalization on most. This contrasts with the results of [58], where SLS behaves robustly and excels. To exclude the possibility of errors on our side, we reimplemented the SLS experiment on a ResNet34 and could reproduce similarly good performance as in [58] (Appendix D.3). Our results suggest that the interpolation assumption on which SLS is based is not always valid for the considered tasks. Considering the box plots of Figures 5 and 14, which represent the sensitivity to hyperparameter combinations one would likely try on a new, unknown objective, we can see that PAL has a strong tendency to exhibit low sensitivity in combination with good performance. To emphasize this statement, a sensitivity analysis of PAL's hyperparameters (Appendix Figure 19) shows that PAL performs well over a wide range of each hyperparameter on a ResNet32. In wall-clock time PAL performs as fast as SLS, but slower than the other optimizers, which achieve similar speeds (Appendix D.2). However, depending on the scenario, an automatic, well-performing learning rate schedule might compensate for the slower speed.
Considering the learning rate schedules of PAL (row 4 of Figures 5, 13, 14, 15), we obtained unexpected results. PAL, which estimates the learning rate directly from approximated local shape information, does not follow a schedule that is similar to those of SLS, ALIG, SGDHD or any of the commonly used hand-crafted schedules such as piecewise constant or cosine decay. However, it achieves similar results. An interesting side result is that ALIG and SGDHD tend to perform best if hyperparameters are chosen in such a way that the learning rate is only changed slightly, so that virtually an SGD training with fixed learning rate is performed.

6 On the exactness of line searches on batch losses

In this section we investigate the general question of whether line searches that estimate the location of the minimum of batch losses exactly are beneficial. In Figure 2 we showed that PAL can perform an almost exact line search on batch losses if we use a fixed update step adaptation factor (Section 4.3). However, PAL's best hyperparameter configuration does not perform an exact line search (see Figure 6). Consequently, we analyzed how an exact line search, which exactly estimates a minimum of the line function, behaves. We implemented an inefficient binary line search (see Appendix E), which measured up to 20 values on each line to estimate the position of a minimum. The results, given in Figure 6, show that an optimal line search does not optimize well. Thus, the reason why PAL performs well is not the exactness of its update steps. In fact, slightly inexact update steps seem to be beneficial. These results call into question Assumption 1, which assumes that the position of a minimum on a line in negative gradient direction of the batch loss L(x_t; θ) is a suitable estimator for the minimum of the empirical loss L on this line to perform a successful optimization process. To investigate this further, we tediously measured the empirical loss L and the distribution of batch losses for one training process on a ResNet32. Our results suggest, as exemplarily shown in Figure 7, that on a line function defined by the gradient of L(x_t; θ), the position of the minimum of L(x_t; θ) is not always a good estimator for the position of the minimum of the empirical loss L. This explains why exact line searches on the batch loss perform weakly. Corollaries are that the empirical loss on the investigated lines also tends to be locally convex and that the optimal step size tends to be smaller than the step size given by the batch loss on such lines. This is a possible explanation for why the slightly too narrow parabolic approximations of PAL without update step adaptation perform well.
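The exact line search compared against here can be sketched as a simple ternary search on the sampled line, relying on the local convexity observed in Section 3; the concrete routine of Appendix E may differ, so this is only our illustration.

```python
import numpy as np

def exact_line_search(loss_fn, grad_fn, theta, batch, s_hi=1.0, n_iters=20):
    """Estimate the minimum of l_t(s) on [0, s_hi] by ternary search,
    assuming the cross section is unimodal (locally convex)."""
    g = grad_fn(theta, batch)
    d = -g / np.linalg.norm(g)
    lo, hi = 0.0, s_hi
    for _ in range(n_iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if loss_fn(theta + m1 * d, batch) < loss_fn(theta + m2 * d, batch):
            hi = m2          # the minimum lies in [lo, m2]
        else:
            lo = m1          # the minimum lies in [m1, hi]
    return theta + 0.5 * (lo + hi) * d
```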
7 PAL and Interpolation

This section analyzes whether the reason why PAL performs well is related to the interpolation condition. Formally, interpolation requires that the gradient with respect to each sample converges to zero at the optimum. We repeated the experiments of the SLS paper (see [58], Sections 7.2 and 7.3), which analyze the performance on problems for which interpolation holds or does not hold. Figure 8 shows that PAL, like SLS, converges faster to an artificial optimization floor on non-over-parameterized models (k = 4) of the matrix factorization problem of [58], Section 7.2. In the interpolation case, PAL and SLS converge linearly to machine precision. On the binary classification problem of [58], Section 7.3, which uses a softmax loss and RBF kernels on the mushrooms and ijcnn datasets, we observe that PAL and SLS converge fast on the mushrooms task, for which the interpolation condition holds (Figure 9). However, PAL converges faster on the ijcnn task, for which the interpolation condition does not hold. The results indicate that the interpolation condition is beneficial for PAL, but PAL also performs robustly when it is likely not satisfied (see Figures 5, 13, 14, 15); in those experiments PAL mostly performs competitively, but SLS does not. However, the relation of the parabolic property to interpolation needs to be investigated more closely in the future.

8 Conclusions

This work tackles a major challenge in current optimization research for deep learning: to automatically find optimal step sizes for each update step. In detail, we focus on line search approaches to deal with this challenge. We introduced a simple, robust and competitive line search approach based on one-dimensional parabolic approximations of batch losses. The introduced algorithm is an alternative to SGD for objectives where default decays are unknown or do not work. Loss functions of DNNs are commonly perceived as being highly non-convex. Our analysis suggests that this intuition does not hold locally, since lines of loss landscapes across models and datasets can be approximated parabolically to high accuracy. This new knowledge might further help to explain why update steps of specific optimizers perform well. To gain deeper insights into line searches in general, we analyzed how an expensive but exact line search on batch losses behaves. Intriguingly, its performance is weak, which lets us conclude that the small inaccuracies of the parabolic approximations are beneficial for training.

Potential Broader Impact

Since we understand our work as basic research, it is extremely error-prone to estimate its specific ethical aspects and future positive or negative social consequences. As optimization research influences the whole field of deep learning, we refer to the following works, which discuss the ethical aspects and social consequences of AI and deep learning in a comprehensive and general way: [6, 41, 61].

Acknowledgments

Maximus Mutschler heartily thanks Lydia Federmann, Kevin Laube, Jonas Tebbe, Mario Laux, Valentin Bolz, Hauke Neitzel, Leon Varga, Benjamin Kiefer, Timon Höfer, Martin Meßmer, Cornelia Schulz, Hamd Riaz, Nuri Benbarka, Samuel Scherer, Frank Schneider, Robert Geirhos and Frank Hirschmann for their comprehensive support.

Funding

This research was supported by the German Federal Ministry of Education and Research (BMBF) project 'Training Center Machine Learning, Tübingen' with grant number 01|S17054.
1. What is the main contribution of the paper regarding training deep networks? 2. What are the strengths of the proposed method, particularly in its ability to approximate the loss surface? 3. What are some concerns or weaknesses of the paper, especially regarding its claims about exact line search and generalization performance? 4. Are there any suggestions for improving the proposed method, such as reusing learning rates or scaling hyperparameters? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a line search method for training deep networks. They propose locally approximating the loss surface with a parabola using two estimates of the loss on a mini-batch and then inferring the step size based on this approximation. They provide evidence that this approximation is reliable on a number of datasets and for a number of models. They also provide evidence that their approximate line search is in fact performing better than exact line search as well as previous line search methods and is better than or competitive with SGD in converged training and validation loss/accuracy. Their method is however not beating the generalization performance of SGD on some datasets and is approximately twice as slow as SGD. Some suggestions are given below.
Strengths
- Fig 1 is interesting and supports the claim of the paper on the parabolic estimate of the loss. What happens closer to the end of training? After 50K iterations on CIFAR-10 and two learning rate drops? In the appendix, EfficientNet and MobileNet seem not to always follow the assumption (Figures 8 and 9, top left subfigures). How often does that happen?
- Fig 4: The learning rate curves for the proposed method seem fairly smooth. Have you tried reusing the learning rate from previous steps to reduce the computational cost? Or is the smoothness an artifact of the averaging over 3 runs?
- Fig 5-6: The curves supporting the claim that exact line search is not desirable are fairly convincing. It would be nice to do a hyperparameter study of the proposed method where the estimated step size is scaled by a constant, and to plot the performance as a function of this hyperparameter. Do we need to always underestimate, or do we need to sometimes overestimate the step size?
Weaknesses
- Fig 4: The proposed method does not generalize as well as SGD. Have you used weight decay?
- Fig 4: The generalization gap between Adam and SGD should mostly go away with appropriate weight decay. See the following papers: Loshchilov and Hutter, Decoupled Weight Decay Regularization, ICLR 2019; Zhang et al., Three Mechanisms of Weight Decay Regularization, ICLR 2019.
- Fig 4: The comparison with SGD on ImageNet does not seem fair, as we know there exists a learning rate schedule that works well. Also it's not clear why other methods are not shown on ImageNet.
NIPS
Title Parabolic Approximation Line Search for DNNs Abstract A major challenge in current optimization research for deep learning is to automatically find optimal step sizes for each update step. The optimal step size is closely related to the shape of the loss in the update step direction. However, this shape has not yet been examined in detail. This work shows empirically that the batch loss over lines in negative gradient direction is mostly convex locally and well suited for one-dimensional parabolic approximations. By exploiting this parabolic property we introduce a simple and robust line search approach, which performs loss-shape dependent update steps. Our approach combines well-known methods such as parabolic approximation, line search and conjugate gradient, to perform efficiently. It surpasses other step size estimating methods and competes with common optimization methods on a large variety of experiments without the need of hand-designed step size schedules. Thus, it is of interest for objectives where step-size schedules are unknown or do not perform well. Our extensive evaluation includes multiple comprehensive hyperparameter grid searches on several datasets and architectures. Finally, we provide a general investigation of exact line searches in the context of batch losses and exact losses, including their relation to our line search approach. 1 Introduction Automatic determination of optimal step sizes for each update step of stochastic gradient descent is a major challenge in current optimization research for deep learning [3,5,12,29,38,43,46,50,58]. One default approach to tackle this challenge is to apply line search methods. Several of these have been introduced for Deep Learning [12, 29, 38, 43, 58]. However, these approaches have not analyzed the shape of the loss functions in update step direction in detail, which is important, since the optimal step size stands in strong relation to this shape. To shed light on this, our work empirically analyses the shape of the loss function in update step direction for deep learning scenarios often considered in optimization. We further elaborate the properties found to define a simple, competitive, empirically justified optimizer. Our contributions are as follows: 1: Empirical analysis suggests that the loss function in negative gradient direction mostly shows locally convex shapes. Furthermore, we show that parabolic approximations are well suited to estimate the minima in these directions (Section 3). 2: Exploiting the parabolic property, we build a simple line search optimizer which constructs its own loss function dependent learning rate schedule. The performance of our optimization method is extensively analyzed, including a comprehensive comparison to other optimization methods (Sections 4,5). 3: We provide a convergence analysis which backs our empirical results, under strong assumptions (Section 4.4). 4: We provide a general investigation of exact line searches on batch losses and their relation to line searches on the exact loss as well as their relation to our line search approach (Section 6) and, finally, analyze the relation of our approach to interpolation (Section 7). 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. The empirical loss L is defined as the average over realizations of a batch-wise loss function L: L(θ) : Rm → R, θ 7→ n−1 ∑n i=1 L(xi; θ) with n being the amount of batches, xi denotes a batch of a dataset and θ ∈ Rm denotes the parameters to be optimized. 
Note, that we consider a sample as one batch of multiple inputs. We denote L(xt; θt) the batch loss of a batch x at optimization step t. In this work, we consider L(xt; θt) in negative gradient direction: lt(s) : R→ R, s 7→ L(xt; θt + s · −gt ||gt|| ) (1) where gt is∇θtL(xt; θt). For simplification, we denote lt(s) a line function or vertical cross section and s a step on this line. The motivation of our work builds upon the following assumption: Assumption 1. (Informal) The position θmin = θt + smin −gt||gt|| of a minimum of lt is a well enough estimator for the position of the minimum of the empirical loss L on the same line to perform a successful optimization process. We empirically analyze Assumption 1 further in section 6. 2 Related work Our optimization approach is based on well-known methods, such as line search, the non linear conjugate gradient method and quadratic approximation, which can be found in Numerical Optimization [28], which, in addition, describes a similar line search routine for the deterministic setting. The concept of parabolic approximations is also exploited by the well known line search of More and Thunte [40]. Our work contrasts common optimization approaches in deep learning by directly exploiting the parabolic property (see Section 3) of vertical cross sections of the batch loss. Similarly, SGD-HD [3] performs update steps towards the minimum on vertical cross sections of the batch loss, by performing gradient descent on the learning rate. Concurrently, [10] explored a similar direction as this work by analyzing possible line search approximations for DNN loss landscapes, but does not exploit these for optimization. The recently published Stochastic Line-Search (SLS) [58] is an optimized backtracking line search based on the Armijo condition, which samples, like our approach, additional batch losses from the same batch and checks the Armijo condition on these. [58] assumes that the model interpolates the data. Formally, this implies that the gradient at a minimum of the empirical loss is 0 for the empirical loss as well as for all batch (sample) losses. [12] also uses a backtracking Armijo line search, but with the aim to regulate the optimal batch size. SLS exhibits competitive performance against multiple optimizers on several DNN tasks. [43] introduces a related idea but does not provide empirical results for DNNs. The methodically appealing but complex Probabilistic Line Search (PLS) [38] and Gradient Only Line Search (GOLS1) [29] are considering a discontinuous stochastic loss function. GOLS1 searches for a minimum on lines by searching for a sign change of the first directional derivative in search direction. PLS optimizes on lines of a stochastic loss function by approximating it with a Gaussian Process surrogate and exploiting a probabilistic formulation of the Wolf conditions. Both approaches show that they can optimize successfully on several machine learning problems and can compete against plain SGD. From the perspective of assumptions about the shape of the loss landscape, second order methods such as oLBFGS [53], KFRA [7], L-SR1 [45], QUICKPROP [15], S-LSR1 [4], and KFAC [39] generally assume that the loss function can be approximated locally by a parabola of the same dimension as the loss function. Adaptive methods such as SGD with momentum [49], ADAM [30], ADAGRAD [14], ADABOUND [37], AMSGRAD [47] or RMSProp [57] focus more on the handling of noise than on shape assumptions. 
In addition, methods exist that approximate the loss function in specific directions: The L4 adaptation scheme [50] as well as ALIG [5] estimate step sizes by approximating the loss function linearly in negative gradient direction, whereas our approach approximates the loss function parabolically in negative gradient direction. Finally, COCOB [42] has to be mentioned, an alternative learning rate free approach, which automatically estimates step directions and sizes with a reward based coin betting concept. 3 Empirical analysis of the shape of batch losses on vertical cross sections In this section we analyze line functions (see Eq. 1) during the training of multiple architectures and show that they locally exhibit mostly convex shapes, which are well suited for parabolic approximations. We focus on CIFAR-10, as it is extensively analyzed in optimization research for deep learning. However, on random samples of MNIST, CIFAR-100 and ImageNet we observed the same results. We analyzed cross sections of 4 common used architectures in detail. To do so, we evaluated the cross sections of the first 10000 update steps for each architecture. For each cross section we sampled 50 losses and performed a parabolic approximation (see Section 4). An unbiased selection of our results on a ResNet32 is shown in Figure 1. Further results are given in Appendix A. In accordance with [59], we conclude that the analyzed cross sections tend to be locally convex. In addition, one-dimensional parabolic approximations of the form f(s) = as2 + bs+ c with a 6= 0 are well suited to estimate the position of a minimum on such cross sections. To substantiate the later observation, we analyzed the angle between the line direction and the gradient at the estimated minimum during training. A position is a local extremum or saddle point of the cross section if and only if the angle between the line direction and the gradient at the position is 90◦, if measured on the same batch. 1 As shown in Figures 2 and 3, this property holds well for several architectures trained on MNIST, CIFAR-10, CIFAR-100 and ImageNet. The property fits best for MNIST and gets worse for more complex tasks such as ImageNet. We have to note, that measuring step sizes and update step adaptations factors (see Sections 4.1 and4.3) were chosen to fit the line functions decently. We can ensure that the extrema found are minima, since we additionally plotted the line function for each update step. In addition, we analyzed vertical cross sections in conjugate like directions and random directions. Vertical cross section in conjugate like directions also tend to have convex shapes (see Appendix D.4 Figure 17 ). However, vertical cross sections in random directions rarely exhibit convex shapes. Figure 2: Angles between the line direction and the gradient at the estimated minimum measured on the same batch. If the angle is 90◦, the estimated minimum is a real local minimum. We know from additional line plots that the found extrema or saddle points are minima. Left: measurement over the first 10 epochs. Right: measurement over the first 60 epochs. Update step adaptation (see Section 4.3) is applied. 1This holds because if the directional derivative of the measured gradient in line direction is 0, the current position is an extremum or saddle point of the cross sections and the angle is 90◦. If the position is not a extremum or saddle point, the directional derivative is not 0 [28]. 
4 The line search algorithm By exploiting the property, that parabolic approximations are well suited to estimate the position of minima on line functions, we introduce Parabolic Approximation Line Search (PAL). This simple approach combines well-known methods from basic optimization such as parabolic approximation and line search [28], to perform an efficient line search. We note, that the general idea of this method can be applied to any optimizer that provides an update step direction. 4.1 Parameter update rule An intuitive explanation of PAL’s parameter update rule based on a parabolic approximation is given in Figure 4. Since lt(s) (see Eq.1) is assumed to exhibit a convex and almost parabolic shape, we approximate it with l̂t(s) = as2 + bs + c with a 6= 0 and a, b, c ∈ R. Consequently, we need three measurements to define a, b and c. Those are given by the current loss lt(0), the derivative ||gt|| ) where gt is ∇θtL(xt; θt). The red curve is its parabolic approximation l̂(s). With l(0), l(µ) and gt (orange), we have the three parameters needed to determine the update step supd to the minimum of the parabolic approximation. in gradient direction l′t(0) = −||gt|| (see Eq. 4) and an additional loss lt(µ) with measuring distance µ ∈ R+. We get a = lt(µ)−lt(0)−l ′ t(0)µ µ2 , b = l′t(0), and c = lt(0). The update step supd to the minimum of the parabolic approximation l̂t(s) is thus given by: supdt = − l̂′t(0) l̂′′t (0) = − b 2a = −l′t(0) 2 lt(µ)−lt(0)−l′t(0)µ µ2 (2) Note, that l̂′′t (0) is the second derivative of the approximated parabola and is only identical to the exact directional derivative −gt ||gt||H(L(xt; θt)) −gTt ||gt|| if the parabolic approximation fits. The normalization of the gradient to unit length (Eq.1) was chosen to have the measuring distance µ independent of the gradient size and of weight scaling. Note that two network inferences are required to determine lt(0) and lt(µ). Consequently, PAL needs two forward passes and one backward pass through a model. Further on, the batch loss L(xt; θt) may include random components, but, to ensure con- tinuity during one line search, drawn random numbers have to be reused for each value determination of L at t (e.g. for Dropout [55]. The memory required by PAL is similar to SGD with momentum, since only the last update direction has to be saved. A basic, well performing version of PAL is given in Algorithm 1. Algorithm 1 The basic version of our proposed line search algorithm. See Section 4 for details. Input: µ: measuring step size Input: L(x; θ): loss function Input: x: list of input vectors Input: θ0: initial parameter vector 1: t← 0 2: while θt not converged do 3: l0 ← L(xt; θt) # l0 = lt(0) see Eq. 1 4: gt ← −∇θtL(xt; θt) 5: lµ ← L(xt; θt + µ gt||gt|| ) 6: b← −||gt|| 7: a← lµ−l0−bµµ2 8: if proper curvature then 9: supd ← − b2a 10: else 11: # set supd according to section 4.2 12: end if 13: θt+1 ← θt + supd gt||gt|| 14: t← t+ 1 15: end while 16: return θt 4.2 Case discrimination of parabolic approximations Since not all parabolic approximations are suitable for parameter update steps, the following cases are considered separately. Note that b = l′t(0) and a = 0.5l ′′ t (0). 1: a > 0 and b < 0: parabolic approximation has a minimum in line direction, thus, the parameter update is done as described in Section 4.1. 2: a ≤ 0 and b < 0: parabolic approximation has a maximum in negative line direction, or is a line with negative slope. In those cases a parabolic approximation is inappropriate. 
supd is set to µ, since the second measured point has a lower loss than the first. 3: Since b = −||gt|| cannot be greater than 0, the only case left is an extremum at the current position (l′(0) = 0). In this case, no weight update is performed. However, the loss function is changed by the next batch. In accordance to Section 3, cases 2 and 3 appeared very rarely in our experiments. 4.3 Additions We introduce multiple additions for Algorithm 1 to fine tune the performance and handle degenerate cases. We emphasize that our hyperparameter sensitivity analysis (Appendix D.6) suggests that the influence of the introduced hyperparameters on the optimizer’s performance are low. Thus, they only need to be adapted to fine tune the results. The full version of PAL including all additions is given in Appendix B Algorithm 2. Direction adaptation: Instead of following the direction of the negative gradient we follow an adapted conjugate-like direction dt: dt = −∇θtL(xt; θt) + βdt−1 d0 = −∇θ0L(x0; θ0) (3) with β ∈ [0, 1]. Since now an adapted direction is used, l′t(0) changes to: l′t(0) = ∇θtL(xt; θt) dt ||dt|| (4) This approach aims to find a more optimal search direction than the negative gradient. We implemented and tested the formulas of Fletcher-Reeves [16], Polak-Ribière [48], Hestenes-Stiefel [24] and Dai-Yuan [11] to determine conjugate directions under the assumption that the loss function is a quadratic. However, choosing a constant β of value 0.2 or 0.4 performs equally well. The influence of β and dynamic update steps on PAL’s performance is discussed in Appendix D.5. In the analyzed scenario β can both increase and decrease the performance, whereas, dynamic update steps mostly increase the performance. The combination of both is needed to achieve optimal results. Update step adaptation: Our preliminary experiments revealed a systematic error caused by constantly approximating with slightly too narrow parabolas. Therefore, supd is multiplied by a parameter α ≥ 1 (compare to Eq. 2). This is useful to estimate the position of the minimum on a line more exactly, but has minor effects on training performance. Maximum step size: To hinder the algorithm from failing due to inaccurate parabolic approximations, we use a maximum step size smax. The new update step is given by min(supd, smax). However, most of our experiments with smax = 100.5 ≈ 3.16 never reached this step size and still performed well. 4.4 Theoretical considerations Usually, convergence in deep learning is shown for convex stochastic functions with a L-Lipschitz continuous gradient. However, since our approach originates from empirical results, it is not given that a profound theoretical analysis is possible. In order to show any convergence guarantees for parabolic approximations, we have to fall back to uncommonly strong assumptions which lead to quadratic models. Since convergence proofs on quadratics are of minor importance for most readers, our derivations can be found in Appendix C. 5 Evaluation 5.1 Experimental design We performed a comprehensive evaluation to analyze the performance of PAL on a variety of deep learning optimization tasks. Therefore, we tested PAL on commonly used architectures on CIFAR10 [31], CIFAR-100 [31] and ImageNet [13]. For CIFAR-10 and CIFAR-100, we evaluated on DenseNet40 [25], EfficientNetB0 [56], ResNet32 [23] and MobileNetV2 [52]. On ImageNet we evaluated on DenseNet121 and ResNet50. In addition, we considered an RNN trained on the Tolstoi war and peace text prediction task. 
4.4 Theoretical considerations

Convergence in deep learning is usually shown for convex stochastic functions with an L-Lipschitz continuous gradient. However, since our approach originates from empirical results, it is not guaranteed that a profound theoretical analysis is possible. In order to show any convergence guarantees for parabolic approximations, we have to fall back on uncommonly strong assumptions, which lead to quadratic models. Since convergence proofs on quadratics are of minor importance for most readers, our derivations can be found in Appendix C.

5 Evaluation

5.1 Experimental design

We performed a comprehensive evaluation to analyze the performance of PAL on a variety of deep learning optimization tasks. We tested PAL on commonly used architectures on CIFAR-10 [31], CIFAR-100 [31] and ImageNet [13]. For CIFAR-10 and CIFAR-100, we evaluated on DenseNet40 [25], EfficientNetB0 [56], ResNet32 [23] and MobileNetV2 [52]; on ImageNet, we evaluated on DenseNet121 and ResNet50. In addition, we considered an RNN trained on the Tolstoi War and Peace text prediction task.

We compare PAL to SLS [58], whose Armijo variant is state-of-the-art in the line search field for DNNs. In addition, we compare against the following well-studied and widely used first-order optimizers: SGD with momentum [49], ADAM [30] and RMSProp [57], as well as against SGDHD [3] and ALIG [5], which automatically estimate learning rates in negative gradient direction, and, finally, against the coin betting approach COCOB [42]. To perform a fair comparison, we evaluated a variety of commonly used hyperparameter combinations for each optimizer. In addition, we utilized those combinations to analyze the hyperparameter sensitivity of each optimizer. Since a grid search on ImageNet was too expensive, the best hyperparameter configuration from the CIFAR-100 evaluation was used to test hyperparameter transferability. A detailed explanation of the experiments, including the hyperparameters and data augmentations used, is given in Appendix D.8. All in all, we trained over 4500 networks with TensorFlow 1.15 [1] on Nvidia GeForce GTX 1080 Ti graphics cards. Since PAL is a line search approach, the predefined learning rate schedules of SGD and the generated schedules of SLS, ALIG, SGDHD and PAL were compared. Due to the normalization, PAL's learning rate is given by s_upd,t/‖d_t‖.

5.2 Results

A selection of our results is given in Figure 5. The results for the other architectures trained on CIFAR-10, CIFAR-100, ImageNet and Tolstoi are found in Appendix D, Figures 13, 14 and 15. A table with the exact numerical results of all experiments is provided in Appendix D.9. In most cases PAL decreases the training loss faster and to a lower value than the other optimizers (row 1 of Figures 5, 13, 14, 15). Considering validation and test accuracy, PAL surpasses ALIG, SGDHD and COCOB, competes with RMSProp and ADAM, but is surpassed by SGD (rows 2 and 3 of Figures 5, 13, 14, 15). However, RMSProp, ADAM and SGD were tuned with a step size schedule. If we compare PAL to their basic implementations without a schedule, which roughly correspond to the first plateau reached in row 2 of Figures 5, 13, 14, 15, PAL surpasses the other optimizers; this shows that it can find a well-performing step size schedule, which is especially interesting for problems for which default schedules might not work. SLS decreases the training loss further than the other optimizers on a few problems, but shows weak performance and poor generalization on most. This contrasts with the results of [58], where SLS behaves robustly and excels. To exclude the possibility of errors on our side, we re-ran the SLS experiment on a ResNet34 and could reproduce performance similar to that reported in [58] (Appendix D.3). Our results suggest that the interpolation assumption on which SLS is based is not always valid for the considered tasks. Considering the box plots of Figures 5 and 14, which represent the sensitivity to the hyperparameter combinations one would likely try on a new, unknown objective, we can see that PAL has a strong tendency to exhibit low sensitivity in combination with good performance. To emphasize this statement, a sensitivity analysis of PAL's hyperparameters (Appendix Figure 19) shows that PAL performs well over a wide range of each hyperparameter on a ResNet32. In terms of wall-clock time, PAL performs as fast as SLS but slower than the other optimizers, which achieve similar speeds among themselves (Appendix D.2). However, depending on the scenario, an automatic, well-performing learning rate schedule might compensate for the slower speed.
Considering the learning rate schedules of PAL (row 4 of Figures 5, 13, 14, 15), we obtained unexpected results. PAL, which estimates the learning rate directly from approximated local shape information, does not follow a schedule similar to that of SLS, ALIG, SGDHD or any of the commonly used hand-crafted schedules such as piecewise constant or cosine decay. However, it achieves similar results. An interesting side result is that ALIG and SGDHD tend to perform best if their hyperparameters are chosen such that the learning rate changes only slightly, so that effectively an SGD training with a fixed learning rate is performed.

6 On the exactness of line searches on batch losses

In this section we investigate the general question of whether line searches that exactly estimate the location of the minimum of batch losses are beneficial. In Figure 2 we showed that PAL can perform an almost exact line search on batch losses if we use a fixed update step adaptation factor (Section 4.3). However, PAL's best hyperparameter configuration does not perform an exact line search (see Figure 6). Consequently, we analyzed how an exact line search, which exactly estimates a minimum of the line function, behaves. We implemented an inefficient binary line search (see Appendix E; a sketch is given at the end of this section), which measured up to 20 values on each line to estimate the position of a minimum. The results, given in Figure 6, show that an optimal line search does not optimize well. Thus, the reason why PAL performs well is not the exactness of its update steps; in fact, slightly inexact update steps seem to be beneficial. These results call Assumption 1 into question, which states that the position of a minimum on a line in negative gradient direction of the batch loss L(x_t; θ) is a suitable estimator for the minimum of the empirical loss 𝓛 on this line to perform a successful optimization process. To investigate this further, we painstakingly measured the empirical loss 𝓛 and the distribution of batch losses for one training process on a ResNet32. Our results suggest, as exemplified in Figure 7, that on a line function defined by the gradient of L(x_t; θ), the position of the minimum of L(x_t; θ) is not always a good estimator for the position of the minimum of the empirical loss 𝓛. This explains why exact line searches on the batch loss perform poorly. Corollaries are that the empirical loss on the investigated lines also tends to be locally convex and that the optimal step size tends to be smaller than the step size given by the batch loss on such lines. This is a possible explanation of why the slightly too narrow parabolic approximations of PAL without update step adaptation perform well.
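The following is a minimal sketch of the kind of exact line search used in this comparison. Since the paper defers the implementation to Appendix E, the bracketing interval and interval-shrinking strategy below are assumptions; the sketch only assumes the line function is unimodal on the searched interval, which the analysis of Section 3 supports locally.

```python
def exact_line_search(line_loss, s_lo=0.0, s_hi=1.0, max_evals=20):
    """Approximately locate a minimum of the 1-D function line_loss(s)
    on [s_lo, s_hi] by interval shrinking (ternary search), using at
    most max_evals loss measurements on the line.
    line_loss(s) evaluates the batch loss at theta + s * direction."""
    evals = 0
    while evals + 2 <= max_evals:
        m1 = s_lo + (s_hi - s_lo) / 3.0
        m2 = s_hi - (s_hi - s_lo) / 3.0
        if line_loss(m1) < line_loss(m2):   # minimum lies left of m2
            s_hi = m2
        else:                               # minimum lies right of m1
            s_lo = m1
        evals += 2
    return 0.5 * (s_lo + s_hi)
```

With a budget of 20 measurements this locates the batch-loss minimum on the line far more precisely than PAL's single parabolic fit, which is exactly what makes the weak optimization performance of such an "exact" search notable.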
7 PAL and Interpolation

This section analyzes whether the reason why PAL performs well is related to the interpolation condition. Formally, interpolation requires that the gradient with respect to each sample converges to zero at the optimum. We repeated the experiments of the SLS paper (see [58], Sections 7.2 and 7.3), which analyze the performance on problems for which interpolation holds or does not hold. Figure 8 shows that PAL, like SLS, converges faster to an artificial optimization floor on non-over-parameterized models (k = 4) of the matrix factorization problem of [58], Section 7.2. In the interpolation case, PAL and SLS converge linearly to machine precision.

On the binary classification problem of [58], Section 7.3, which uses a softmax loss and RBF kernels on the mushrooms and ijcnn datasets, we observe that PAL and SLS converge fast on the mushrooms task, for which the interpolation condition holds (Figure 9). However, PAL converges faster on the ijcnn task, for which the interpolation condition does not hold. The results indicate that the interpolation condition is beneficial for PAL, but PAL also performs robustly when it is likely not satisfied (see Figures 5, 13, 14, 15); in those experiments PAL mostly performs competitively, whereas SLS does not. However, the relation of the parabolic property to interpolation needs to be investigated more closely in the future.

8 Conclusions

This work tackles a major challenge in current optimization research for deep learning: to automatically find optimal step sizes for each update step. In particular, we focus on line search approaches to deal with this challenge. We introduced a simple, robust and competitive line search approach based on one-dimensional parabolic approximations of batch losses. The introduced algorithm is an alternative to SGD for objectives where default decays are unknown or do not work. Loss functions of DNNs are commonly perceived as being highly non-convex. Our analysis suggests that this intuition does not hold locally, since lines of loss landscapes across models and datasets can be approximated parabolically to high accuracy. This new knowledge might further help to explain why the update steps of specific optimizers perform well. To gain deeper insights into line searches in general, we analyzed how an expensive but exact line search on batch losses behaves. Intriguingly, its performance is weak, which lets us conclude that the small inaccuracies of the parabolic approximations are beneficial for training.

Potential Broader Impact

Since we understand our work as basic research, it is extremely error-prone to estimate its specific ethical aspects and future positive or negative social consequences. As optimization research influences the whole field of deep learning, we refer to the following works, which discuss the ethical aspects and social consequences of AI and deep learning in a comprehensive and general way: [6, 41, 61].

Acknowledgments

Maximus Mutschler heartily thanks Lydia Federmann, Kevin Laube, Jonas Tebbe, Mario Laux, Valentin Bolz, Hauke Neitzel, Leon Varga, Benjamin Kiefer, Timon Höfer, Martin Meßmer, Cornelia Schulz, Hamd Riaz, Nuri Benbarka, Samuel Scherer, Frank Schneider, Robert Geirhos and Frank Hirschmann for their comprehensive support.

Funding

This research was supported by the German Federal Ministry of Education and Research (BMBF) project 'Training Center Machine Learning, Tübingen' with grant number 01|S17054.
1. What is the focus of the paper regarding semantic correspondence? 2. What are the strengths and weaknesses of the proposed approach in terms of neural representation? 3. Do you have concerns about the semantic correspondence representation? 4. What are the limitations of NeMF in dealing with different image pairs? 5. How does NeMF differ from other semantic correspondence methods?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

After considering the authors' response (rebuttal), I have a slightly more positive view of the paper, but the increase was not enough to change my overall score. This paper presents a parabolic-approximation line search approach to choose step sizes that is claimed to be suitable for deep learning optimization. The major contributions include:

Contribution 1: Empirical analysis suggesting convexity of the loss function in the negative gradient direction and that parabolic approximations of the empirical loss function are well suited to estimate the minima in these directions. (significance: medium)
Contribution 2: A line search procedure based on a parabolic fit to the loss function at one point, in addition to the current point, along the current step direction, which can be used to compute a "good" step size. (significance: medium)
Contribution 3: Empirical comparison of the proposed method with known step-size schedules for first-order methods including SGD, Adam, RMSProp, and a stochastic line search method proposed in [54]. (significance: medium)
Contribution 4: Convergence analysis under very strong assumptions. (significance: very low)

Strengths

- Significance and novelty of the contributions:
Contribution 1: Empirical evidence of the observed convexity property is presented for classification tasks on CIFAR-10 using ResNet32, DenseNet40, MobileNetV2, and EfficientNet. The convexity property was also observed in the Probabilistic Line Search paper [36] on MNIST but was not extensively studied empirically in that work.
Contribution 2: The novelty of the proposed line search procedure is that it requires computing only one additional sample loss measurement (in addition to the loss function value and the gradient at the current point) to fit the parabolic approximation and estimate the step size. We note that using a parabolic fit is a standard approach in line search procedures in deterministic optimization.
Contribution 3: The method is compared to SGD, Adam and RMSProp with known "good" schedules, and to the SLS method, on CIFAR-10, CIFAR-100, and ImageNet classification tasks using various architectures. The paper documents the ability of the method to pick good learning rates, decreasing the training loss faster (iteration-wise) than SGD, Adam, and RMSProp in some settings. However, it never outperforms SGD in terms of validation accuracy. An RNN task on TOLSTOI is also presented in the appendix.
Contribution 4: The analysis seems sound to me, although it requires very strong assumptions.
- Relevance to the NeurIPS community: Clearly relevant to the NeurIPS community. Choosing step-size schedules is vital for deep learning and allows for faster tuning of models. This work aims at providing a method to automatically select step sizes.

Weaknesses

- Weaknesses and limitations of the contributions:
Contribution 1: Empirical evidence of the observed convex parabolic shape is only presented for classification tasks on one dataset, CIFAR-10. Since this is the main contribution of the paper, other deep learning tasks and loss functions should be explored. Other datasets should also be studied to support the claim. (The authors' feedback has adequately addressed this issue.)
Contribution 2: The method introduces new hyperparameters (update step adaptation, measuring step size, maximal step size).
The sensitivity study in Figure 14 is not convincing for the following reasons:
- It is unclear to me how different combinations of these parameters would perform based on that figure.
- Although the gradient is normalized, I suspect the measuring step size value will still depend on the scale of the problem, and I am concerned that the proposed range in Figure 14 is only adapted to ResNet32 trained on the CIFAR-10 problem.
The method is claimed to be generalizable to any step direction, but no empirical evidence is presented to back this up. It would be interesting to see how the proposed step size procedure would perform on SGD directions, with or without momentum, and on Adam directions. (The authors' feedback has addressed this issue.)
Contribution 3: The authors used a so-called conjugate gradient method to pick the search direction, making it difficult to assess whether PAL picks good learning rates based on the figures. It would be more convincing to compare the learning rates obtained by PAL using SGD-with-momentum (or any other algorithm) directions to the optimal learning rate schedule for the same algorithm.
- Validation accuracy is consistently lower than SGD across the presented problems.
- No plots comparing CPU times are included in the experiments. Computing an additional evaluation of the loss function at every step requires an additional forward pass, and how this affects the total run time should be presented.
- Comparison with PLS is not included. (The authors' feedback has adequately addressed this issue.)
- Comparison with second-order optimizers is not included.
Contribution 4: The convergence proof is provided under strong assumptions (parabolic shape + same Q matrix for the individual losses) which are, as mentioned by the authors, not valid for general deep learning scenarios.
NIPS
Title: Parabolic Approximation Line Search for DNNs

Abstract

A major challenge in current optimization research for deep learning is to automatically find optimal step sizes for each update step. The optimal step size is closely related to the shape of the loss in the update step direction. However, this shape has not yet been examined in detail. This work shows empirically that the batch loss over lines in negative gradient direction is mostly convex locally and well suited for one-dimensional parabolic approximations. By exploiting this parabolic property, we introduce a simple and robust line search approach which performs loss-shape-dependent update steps. Our approach combines well-known methods, such as parabolic approximation, line search and conjugate gradient, to perform efficiently. It surpasses other step size estimating methods and competes with common optimization methods on a large variety of experiments, without the need for hand-designed step size schedules. Thus, it is of interest for objectives where step size schedules are unknown or do not perform well. Our extensive evaluation includes multiple comprehensive hyperparameter grid searches on several datasets and architectures. Finally, we provide a general investigation of exact line searches in the context of batch losses and exact losses, including their relation to our line search approach.

1 Introduction

Automatic determination of optimal step sizes for each update step of stochastic gradient descent is a major challenge in current optimization research for deep learning [3, 5, 12, 29, 38, 43, 46, 50, 58]. One default approach to tackle this challenge is to apply line search methods. Several of these have been introduced for deep learning [12, 29, 38, 43, 58]. However, these approaches have not analyzed in detail the shape of the loss function in update step direction, which is important, since the optimal step size is strongly related to this shape. To shed light on this, our work empirically analyzes the shape of the loss function in update step direction for deep learning scenarios often considered in optimization. We further exploit the properties found to define a simple, competitive, empirically justified optimizer. Our contributions are as follows:

1: Empirical analysis suggests that the loss function in negative gradient direction mostly shows locally convex shapes. Furthermore, we show that parabolic approximations are well suited to estimate the minima in these directions (Section 3).
2: Exploiting the parabolic property, we build a simple line search optimizer which constructs its own loss-function-dependent learning rate schedule. The performance of our optimization method is extensively analyzed, including a comprehensive comparison to other optimization methods (Sections 4, 5).
3: We provide a convergence analysis which backs our empirical results, under strong assumptions (Section 4.4).
4: We provide a general investigation of exact line searches on batch losses and their relation to line searches on the exact loss, as well as their relation to our line search approach (Section 6), and, finally, analyze the relation of our approach to interpolation (Section 7).

The empirical loss 𝓛 is defined as the average over realizations of a batch-wise loss function L:

$$\mathcal{L}(\theta): \mathbb{R}^m \to \mathbb{R}, \quad \theta \mapsto \frac{1}{n}\sum_{i=1}^{n} L(x_i;\theta)$$

with n being the number of batches, where x_i denotes a batch of a dataset and θ ∈ ℝ^m denotes the parameters to be optimized.
Note that we consider a sample to be one batch of multiple inputs. We denote by L(x_t; θ_t) the batch loss of batch x_t at optimization step t. In this work, we consider L(x_t; θ_t) in negative gradient direction:

$$l_t(s): \mathbb{R} \to \mathbb{R}, \quad s \mapsto L\Big(x_t;\ \theta_t + s \cdot \frac{-g_t}{\|g_t\|}\Big) \quad (1)$$

where g_t is ∇_{θ_t}L(x_t; θ_t). For simplicity, we call l_t(s) a line function or vertical cross section, and s a step on this line. The motivation of our work builds upon the following assumption:

Assumption 1. (Informal) The position $\theta_{\min} = \theta_t + s_{\min}\frac{-g_t}{\|g_t\|}$ of a minimum of l_t is a good enough estimator for the position of the minimum of the empirical loss 𝓛 on the same line to perform a successful optimization process.

We empirically analyze Assumption 1 further in Section 6.

2 Related work

Our optimization approach is based on well-known methods, such as line search, the nonlinear conjugate gradient method and quadratic approximation, which can be found in Numerical Optimization [28], which, in addition, describes a similar line search routine for the deterministic setting. The concept of parabolic approximations is also exploited by the well-known line search of Moré and Thuente [40]. Our work contrasts with common optimization approaches in deep learning by directly exploiting the parabolic property (see Section 3) of vertical cross sections of the batch loss. Similarly, SGD-HD [3] performs update steps towards the minimum on vertical cross sections of the batch loss by performing gradient descent on the learning rate. Concurrently, [10] explored a similar direction to this work by analyzing possible line search approximations for DNN loss landscapes, but does not exploit these for optimization. The recently published Stochastic Line Search (SLS) [58] is an optimized backtracking line search based on the Armijo condition, which, like our approach, samples additional batch losses from the same batch and checks the Armijo condition on these. [58] assumes that the model interpolates the data; formally, this implies that the gradient at a minimum of the empirical loss is 0 for the empirical loss as well as for all batch (sample) losses. [12] also uses a backtracking Armijo line search, but with the aim of regulating the optimal batch size. SLS exhibits competitive performance against multiple optimizers on several DNN tasks. [43] introduces a related idea but does not provide empirical results for DNNs. The methodically appealing but complex Probabilistic Line Search (PLS) [38] and Gradient-Only Line Search (GOLS1) [29] consider a discontinuous stochastic loss function. GOLS1 searches for a minimum on lines by searching for a sign change of the first directional derivative in search direction. PLS optimizes on lines of a stochastic loss function by approximating it with a Gaussian process surrogate and exploiting a probabilistic formulation of the Wolfe conditions. Both approaches show that they can optimize successfully on several machine learning problems and can compete against plain SGD. From the perspective of assumptions about the shape of the loss landscape, second-order methods such as oLBFGS [53], KFRA [7], L-SR1 [45], QUICKPROP [15], S-LSR1 [4], and KFAC [39] generally assume that the loss function can be approximated locally by a parabola of the same dimension as the loss function. Adaptive methods such as SGD with momentum [49], ADAM [30], ADAGRAD [14], ADABOUND [37], AMSGRAD [47] and RMSProp [57] focus more on the handling of noise than on shape assumptions.
In addition, methods exist that approximate the loss function in specific directions: the L4 adaptation scheme [50] as well as ALIG [5] estimate step sizes by approximating the loss function linearly in negative gradient direction, whereas our approach approximates the loss function parabolically in negative gradient direction. Finally, COCOB [42] has to be mentioned, an alternative learning-rate-free approach which automatically estimates step directions and sizes with a reward-based coin betting concept.

3 Empirical analysis of the shape of batch losses on vertical cross sections

In this section we analyze line functions (see Eq. 1) during the training of multiple architectures and show that they locally exhibit mostly convex shapes, which are well suited for parabolic approximations. We focus on CIFAR-10, as it is extensively analyzed in optimization research for deep learning; however, on random samples of MNIST, CIFAR-100 and ImageNet we observed the same results. We analyzed cross sections of four commonly used architectures in detail. To do so, we evaluated the cross sections of the first 10000 update steps for each architecture. For each cross section we sampled 50 losses and performed a parabolic approximation (see Section 4); a code sketch of this analysis is given at the end of this section. An unbiased selection of our results on a ResNet32 is shown in Figure 1; further results are given in Appendix A. In accordance with [59], we conclude that the analyzed cross sections tend to be locally convex. In addition, one-dimensional parabolic approximations of the form f(s) = as² + bs + c with a ≠ 0 are well suited to estimate the position of a minimum on such cross sections. To substantiate the latter observation, we analyzed the angle between the line direction and the gradient at the estimated minimum during training. A position is a local extremum or saddle point of the cross section if and only if the angle between the line direction and the gradient at that position is 90°, if measured on the same batch.¹ As shown in Figures 2 and 3, this property holds well for several architectures trained on MNIST, CIFAR-10, CIFAR-100 and ImageNet. The property fits best for MNIST and gets worse for more complex tasks such as ImageNet. We have to note that the measuring step sizes and update step adaptation factors (see Sections 4.1 and 4.3) were chosen to fit the line functions decently. We can ensure that the extrema found are minima, since we additionally plotted the line function for each update step. In addition, we analyzed vertical cross sections in conjugate-like directions and in random directions. Vertical cross sections in conjugate-like directions also tend to have convex shapes (see Appendix D.4, Figure 17). However, vertical cross sections in random directions rarely exhibit convex shapes.

[Figure 2: Angles between the line direction and the gradient at the estimated minimum, measured on the same batch. If the angle is 90°, the estimated minimum is a real local minimum. We know from additional line plots that the found extrema or saddle points are minima. Left: measurement over the first 10 epochs. Right: measurement over the first 60 epochs. Update step adaptation (see Section 4.3) is applied.]

¹This holds because if the directional derivative of the measured gradient in line direction is 0, the current position is an extremum or saddle point of the cross section and the angle is 90°. If the position is not an extremum or saddle point, the directional derivative is not 0 [28].
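The following is a minimal sketch of the kind of cross-section analysis described above, for a generic differentiable model; the sampling grid, the least-squares parabola fit, and the helper names `batch_loss`/`batch_grad` are illustrative assumptions, not the authors' measurement code.

```python
import numpy as np

def analyze_cross_section(batch_loss, batch_grad, theta, s_max=1.0, n_samples=50):
    """Sample the batch loss along the normalized negative gradient and fit
    a one-dimensional parabola f(s) = a s^2 + b s + c by least squares."""
    g = batch_grad(theta)
    d = -g / np.linalg.norm(g)                     # unit line direction
    s_grid = np.linspace(0.0, s_max, n_samples)
    losses = np.array([batch_loss(theta + s * d) for s in s_grid])
    a, b, c = np.polyfit(s_grid, losses, deg=2)    # parabolic approximation
    s_min = -b / (2 * a) if a > 0 else None        # estimated line minimum
    # Angle check: at a true extremum of the cross section, the same-batch
    # gradient is orthogonal to the line direction (angle of 90 degrees).
    angle = None
    if s_min is not None:
        g_min = batch_grad(theta + s_min * d)
        cos = (g_min @ d) / np.linalg.norm(g_min)
        angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return s_grid, losses, (a, b, c), s_min, angle
```

Plotting `losses` over `s_grid` together with the fitted parabola reproduces the kind of figures the analysis is based on, and `angle` close to 90° indicates that the parabola's minimum coincides with a true line minimum of the batch loss.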
1. What is the main contribution of the paper, and what are its strengths and weaknesses? 2. What is PAL, and how does it perform compared to other optimizers? 3. What is the purpose of the theoretical analysis in Section 4.4, and what are some issues with the assumptions made? 4. How does the choice of quadratic approximation impact the results, and why is it important to consider local convexity? 5. What is the significance of exact line searches on subsampled losses, and how do they relate to PAL's performance?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

This paper presents an interesting empirical analysis of the loss landscape of commonly used deep learning loss functions. It introduces a novel algorithm called PAL that automatically sets the step size. PAL's step size is obtained from a univariate quadratic approximation of the loss along the (subsampled) negative gradient direction. The choice of a quadratic approximation is based on the empirical observation that, locally, the subsampled loss along the subsampled negative gradient direction is often convex. In most experiments presented, PAL performs better than popular stochastic first-order optimizers, including the state-of-the-art stochastic line search algorithm. The authors provide some theoretical justification for PAL, although the applicability of the theory is very limited. Finally, an investigation of exact line searches on the subsampled losses is conducted, which empirically shows that PAL can often find a local minimum of the univariate function, so descent is often guaranteed on the subsampled loss.

Strengths

The paper is strong in the following aspects:
1. The proposed algorithm PAL performs quite well in terms of convergence rate and generalization properties on multiple deep learning benchmarks.
2. Although PAL requires two forward passes and one backward pass at every iteration to compute its step size, this is nonetheless a fixed cost, as it does not require backtracking, which may require more than two forward passes. This makes PAL easier to implement, and it can be cheaper in terms of computation compared to line search methods that rely on backtracking.
3. The authors point out (in line 128) that when multiple forward passes are required, model evaluation with random components such as dropout needs to re-use the random state to ensure the same batch loss function is being computed. This is a valuable reminder for readers who wish to use these algorithms.

Weaknesses

The main weaknesses of this paper lie in the soundness of the claims and the clarity of the presentation (see Clarity section for the latter). The major issues are:
1. In Section 4.3, the authors introduce "conjugate directions" (Eq. 3) into their update, which is essentially just the SGD update with a heavy-ball momentum term. The connection to conjugate gradient is also unjustified: two directions u and v are said to be conjugate with respect to some positive definite matrix A if <u, Av> = 0. The authors claim that d_t defined in Eq. 3 is a conjugate direction, but did not show this for the successive updates, if it is even possible.
2. Related to the above, it is also not clear why the momentum term is needed. This work presents an empirical analysis of the convex loss landscape in the negative (subsampled) gradient direction; however, the loss landscape in the direction with momentum does not seem to be explored.
3. It is also unclear whether this momentum term could be a confounding factor in the comparison between PAL and SLS, as the vanilla version of SLS is just stochastic line search applied to SGD without momentum.
4. In line 107, the authors assume l_t (defined in Eq. 1) is a quadratic function. First of all, this is a strong assumption, as it does not hold globally unless the loss functions L are chosen to be the squared loss. For nonconvex functions, it may be true locally, in which case lines 107-108 should use a different notation for the univariate quadratic approximation, rather than overloading l_t.
The next few points are regarding the theoretical analysis presented in Section 4.4.
5. Assumption 2 is way too strong for all step sizes and an arbitrary loss function, especially when the focus of this paper is deep learning objectives. It is also only justified locally throughout the paper, by plotting the univariate function at sampled iterates.
6. Lemma 1 says that if every slice of a multivariate function is a univariate quadratic, then the overall function is also quadratic. This is rather trivial, and it would be much cleaner to simply say that the theoretical analysis is based on the squared loss.
7. Proposition 2 assumes that the components in the finite-sum objective are all quadratic functions and that the positive definite matrices defining them are all the same. Not only is it trivial that in this case the minimizer of the overall objective and the minimizers of the individual objectives coincide, it also implies that, for a typical machine learning application, all the features are the same but the labels may be different.
NIPS
Title: A sharp NMF result with applications in network modeling

Abstract

Given an n × n non-negative rank-K matrix Ω where m eigenvalues are negative, when can we write Ω = ZPZ′ for non-negative matrices Z ∈ ℝ^{n,K} and P ∈ ℝ^{K,K}? While most existing works focused on the case of m = 0, our primary interest is in the case of general m. With new proof ideas, we present sharp results on when the NMF problem is solvable, which significantly extend existing results on this topic. The NMF problem is partially motivated by applications in network modeling. For a network with K communities, rank-K models are especially popular. The Degree-Corrected Mixed-Membership (DCMM) model is a recent rank-K model which is especially useful and interpretable in practice. To enjoy such properties, it is of interest to study when a rank-K model can be rewritten as a DCMM model. Using our NMF results, we show that for a rank-K model in the most interesting parameter ranges, we can always rewrite it as a DCMM model.

1 Introduction

Fix (n, K, m) where n ≥ K ≥ 2 and 0 ≤ m ≤ K − 1. We are interested in the following Non-negative Matrix Factorization (NMF) problem.

The NMF problem: given an n × n symmetric non-negative irreducible matrix Ω with rank K, where exactly m of the K nonzero eigenvalues are negative, when can we find non-negative matrices Z ∈ ℝ^{n,K} and P ∈ ℝ^{K,K} such that

$$\Omega = ZPZ' \,? \quad (1.1)$$

Definition 1.1 We say a matrix Ω is non-negative if all of its entries are non-negative, and we say it is positive if all of its entries are (strictly) positive. We say the NMF problem is solvable for Ω if we can find non-negative matrices (Z, P) as above such that Ω = ZPZ′.

We assume K ≥ 2, for the case of K = 1 is trivial, and we assume m ≤ K − 1, for an irreducible non-negative matrix has at least one positive eigenvalue (e.g., by Perron's theorem [9]). NMF is a fundamental problem and has applications in areas such as image processing [5, 23], text learning [21], hyper-spectral unmixing, and social network analysis [13]. Our setting is a special case of NMF where both Ω and P are symmetric, so we may call it symmetric NMF. In the literature, symmetric NMF was widely used in clustering of nonlinearly separable data from a similarity matrix [7], where for a non-negative symmetric matrix Ω, it aims to find a non-negative matrix Z such that

$$\Omega = ZZ', \quad \text{where } Z \in \mathbb{R}^{n,N} \text{ and } N \ge K. \quad (1.2)$$

Note that, first, this implicitly requires that Ω is positive semi-definite. Second, it is understood that for many non-negative and positive semi-definite matrices Ω, the smallest N we can find in the factorization (1.2) is strictly larger than K (the rank of Ω); see the 2021 book by Shaked-Monderer and Berman [26], which is 551 pages and nicely summarizes most existing results on NMF. Unfortunately, our setting in (1.1) is significantly different from that in (1.2), so existing results on NMF do not directly apply. In particular, our NMF setting is motivated by applications in social network modeling, where we must (a) allow Ω to have negative eigenvalues, (b) require that Z has exactly K columns (K = rank(Ω)), and (c) have a factorization Ω = ZPZ′ instead of Ω = ZZ′ (we will soon see that both P and Z have practical meanings in our setting; a toy illustration follows below). Below, in Section 1.1, we introduce several recent network models. In Section 1.2, we explain why the NMF problem (1.1) is important and relevant in social network modeling.
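As a toy numerical illustration of problem (1.1) (a construction of this edit, not from the paper): with a non-negative Z and a symmetric non-negative P that has a negative eigenvalue, Ω = ZPZ′ is a non-negative rank-K matrix with m = 1 negative eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 8, 2
Z = rng.uniform(0.1, 1.0, size=(n, K))   # non-negative, full column rank
P = np.array([[0.1, 1.0],
              [1.0, 0.1]])               # symmetric; eigenvalues 1.1 and -0.9
Omega = Z @ P @ Z.T                      # n x n, non-negative by construction

eigvals = np.linalg.eigvalsh(Omega)
print("rank:", np.linalg.matrix_rank(Omega))                   # K = 2
print("negative eigenvalues:", int((eigvals < -1e-10).sum()))  # m = 1
```

By Sylvester's law of inertia, the nonzero eigenvalues of ZPZ′ carry the same signs as the eigenvalues of P when Z has full column rank, which is why the eigenvalue signature (K − m, m) is preserved here.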
1.1 Several recent rank-K network models, and especially the DCMM model

Consider a symmetric connected network with n nodes and let A be the adjacency matrix, where A(i, j) = 1 if there is an edge connecting nodes i and j, and A(i, j) = 0 otherwise. As a convention, we do not allow self-edges, so all diagonal entries of A are 0. We assume the network has K perceivable communities (communities are scientifically meaningful but mathematically hard to define; intuitively, they are clusters of nodes that have more edges "within" than "across" [12, 30]): C₁, C₂, ..., C_K. In many network models, we assume that the upper triangular entries of A are independent Bernoulli random variables, and that there is an n × n non-negative matrix Ω such that Ω(i, j) = P(A(i, j) = 1) for all 1 ≤ i ≠ j ≤ n. Let diag(Ω) ∈ ℝ^{n,n} be the diagonal matrix whose i-th diagonal entry is Ω(i, i), and let W ∈ ℝ^{n,n} be the matrix where W(i, j) = A(i, j) − Ω(i, j) if i ≠ j and W(i, j) = 0 otherwise. The matrix W is known as the generalized Wigner matrix. With these notations,

$$A = \Omega - \mathrm{diag}(\Omega) + W. \quad (1.3)$$

We call Ω the Bernoulli probability matrix. Frequently, we assume a rank-K model for Ω:

$$\Omega \text{ is an irreducible non-negative matrix with rank } K. \quad (1.4)$$

Note that K is the number of communities and has important practical meaning. Also, irreducibility is a natural assumption, as we assume the network is connected (otherwise, we can study each connected component of the network separately). Below are some examples of rank-K models.

Example 1 (RDPG Model). In a Random Dot Product Graph (RDPG) model [28], we fix a K-dimensional distribution F, generate $y_i \overset{iid}{\sim} F$, and let Ω(i, j) = (y_i, y_j) (inner product), 1 ≤ i, j ≤ n. If we write Y = [y₁, y₂, ..., y_n]′ (which is an n × K matrix), then Ω = YY′. The model is well-known in network and graph modeling. However, a noteworthy issue is that the matrix Ω defined in this way is always positive semi-definite. This makes the model relatively restrictive (e.g., [25]).

Example 2 (GRDPG Model). To address the issue above, Rubin-Delanchy et al. [25] proposed the generalized RDPG (GRDPG). Fix K and 0 ≤ m < K. Let $J_{K,m} = \mathrm{diag}(1, 1, \ldots, -1, \ldots, -1)$ be the K × K diagonal matrix where the first (K − m) diagonal entries are 1 and the remaining diagonal entries are −1. With a similar Y matrix as in RDPG, GRDPG assumes Ω = Y J_{K,m} Y′. An Ω defined in this way can have negative eigenvalues, but we have to choose (Y, J_{K,m}) carefully to make sure that Ω is non-negative, and how to do so is not immediately clear (see the sketch below).

Example 3. It was argued (e.g., [4]) that the Bernoulli probability matrix Ω in a graphon model can be well approximated by a low-rank matrix, provided some regularity conditions hold.

In all the examples above, the parameters do not have explicit practical meanings (at least not directly or not sufficiently), so in a real application it remains unclear how to interpret the estimates of these parameters. Therefore, it is desirable to have models where the parameters have more explicit meanings in practice and so are easier to interpret.
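A small sketch of Example 2 (an illustration of this edit, not the paper's construction): a GRDPG-style Ω = Y J_{K,m} Y′ has the prescribed eigenvalue signature, but non-negativity of Ω must be checked explicitly, since it does not hold for arbitrary Y. Here the rows of Y are drawn with a dominant positive first coordinate, which guarantees non-negativity for this particular draw.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K, m = 10, 3, 1
J = np.diag([1.0] * (K - m) + [-1.0] * m)   # J_{K,m}

# Non-negativity of Omega is not automatic in GRDPG; the dominant first
# coordinate (>= 1) outweighs the sign-flipped coordinates (<= 0.3) here.
Y = rng.uniform(0.0, 0.3, size=(n, K))
Y[:, 0] += 1.0                               # dominant first coordinate
Omega = Y @ J @ Y.T

print("non-negative:", bool((Omega >= 0).all()))
print("eigenvalue signs:", np.sign(np.round(np.linalg.eigvalsh(Omega), 10)))
```

The printed signs show K − m = 2 positive and m = 1 negative nonzero eigenvalues, matching the inertia of J_{K,m}.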
The Degree-Corrected Mixed-Membership (DCMM) model is one such model. Proposed by [15] (see also [29]), the model is motivated by the observation that natural networks usually have severe degree heterogeneity and mixed memberships. To accommodate both features, for each node i, 1 ≤ i ≤ n, we use a (strictly positive) parameter θ_i to model the degree heterogeneity and a weight vector π_i ∈ ℝ^K to model the memberships, where

π_i(k) = weight node i puts on C_k, 1 ≤ k ≤ K.

We call node i pure if π_i is degenerate (i.e., only one entry is nonzero) and mixed otherwise. We also model the community structure by a symmetric and non-negative matrix P ∈ ℝ^{K,K}:

P(k, ℓ) = baseline probability that a node in C_k and a node in C_ℓ have an edge, 1 ≤ k, ℓ ≤ K.

DCMM assumes that for all 1 ≤ i, j ≤ n, Ω(i, j) = θ_i θ_j π_i′ P π_j. If we let θ = (θ₁, ..., θ_n)′, Π = [π₁, ..., π_n]′, and let Θ be the n × n diagonal matrix with Θ(i, i) = θ_i, 1 ≤ i ≤ n, then we have

$$\Omega = \Theta \Pi P \Pi' \Theta. \quad (1.5)$$

Conventionally, we assume rank(Π) = rank(P) = K, so DCMM is also a rank-K model.

Remark 1. The DCMM model can be viewed as an extension of several models, including the classical block model. In fact, (a) DCMM reduces to the Degree-Corrected Block Model (DCBM) [20] if all nodes are pure, (b) DCMM reduces to the Mixed-Membership Stochastic Block Model (MMSBM) [1, 2, 24] if all θ_i are equal, and (c) DCMM reduces to the classical Stochastic Block Model (SBM) [8] if all nodes are pure and all θ_i are equal (as above, node i is pure if π_i is degenerate).

1.2 When is a rank-K network model also a DCMM model?

A DCMM model is a rank-K model but, compared to other rank-K models, all parameter matrices (Θ, Π, P) in the DCMM model have practical meanings and are easy to interpret. These properties make the DCMM model especially appealing in practice, and they motivate the following question:

When is a rank-K network model also a DCMM model? (1.6)

To explain why this is important, we use the dynamic co-citation networks in [11] (see also [10]) as an example. The paper presented 21 co-citation networks for the same set of nodes (i.e., authors) in statistics, each for a different time window. We are interested in (a) how many research areas there are in statistics, (b) what the baseline citation exchanges between different research areas are, and (c) how the research interests of individual authors evolve over time. Here, a co-citation network is a symmetrized citation network where each node is an author, and two nodes have an edge if they have been co-cited at least N times (for an N they picked) in the corresponding time window. The paper suggested that there are 3 primary research areas in statistics (interpreted as "Bayes", "Biostatistics", and "Non-parametric") and a handful of sub-areas, and that it is convenient to model each co-citation network by a DCMM model with K = 3. In detail, for each author i and time window t, 1 ≤ i ≤ n, 1 ≤ t ≤ T, they used a K × K matrix P(t) to model the baseline citation exchanges between the primary research areas, a positive number θ_{it} to model the relative influence (in citations) of author i, and a weight vector π_{it} to model the research interests of author i. If we similarly let Θ(t) = diag(θ_{1t}, θ_{2t}, ..., θ_{nt}) and Π(t) = [π_{1t}, π_{2t}, ..., π_{nt}]′, then the Bernoulli probability matrix of the DCMM model at time t is Ω(t) = Θ(t)Π(t)P(t)(Π(t))′Θ(t). Using the DCMM model, they discovered a research triangle of statisticians (reminiscent of Efron's triangle for statistical philosophy [6]) and used it to visualize the trajectories of research interests of a handful of individual authors. Imagine that we instead use a different rank-K model (e.g., GRDPG) to model these networks, say with Ω(t) = Y(t)J(t)(Y(t))′ for some matrices (Y(t), J(t)). It is unclear how to relate Y(t) to baseline citation exchanges, research interests, and the relative influence of individual authors.
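A compact sketch of assembling the DCMM matrix Ω = ΘΠPΠ′Θ of (1.5); the toy parameter values are an illustration of this edit, chosen so that all entries of Ω are valid edge probabilities.

```python
import numpy as np

n, K = 6, 3
theta = np.array([0.9, 0.5, 0.7, 0.8, 0.6, 0.4])     # degree heterogeneity
Pi = np.array([[1.0, 0.0, 0.0],                       # pure node in C_1
               [0.0, 1.0, 0.0],                       # pure node in C_2
               [0.0, 0.0, 1.0],                       # pure node in C_3
               [0.5, 0.5, 0.0],                       # mixed nodes
               [0.2, 0.3, 0.5],
               [1/3, 1/3, 1/3]])                      # rows are weight vectors
P = np.array([[1.0, 0.3, 0.2],                        # baseline edge probabilities
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
Theta = np.diag(theta)
Omega = Theta @ Pi @ P @ Pi.T @ Theta                 # Eq. (1.5)
# Entries of Omega stay in [0, 1] here since theta <= 1 and P has
# entries in [0, 1].
print(np.round(Omega, 3))
```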
1.2 When is a rank-K network model also a DCMM model?

A DCMM model is a rank-K model, but compared to other rank-K models, all parameter matrices (Θ, Π, P) in the DCMM model have practical meanings and are easy to interpret. These properties make the DCMM model especially appealing in practice, and they motivate the following problem:

When is a rank-K network model also a DCMM model? (1.6)

To explain why this is important, we use the dynamic co-citation networks in [11] (see also [10]) as an example. The paper presented 21 co-citation networks for the same set of nodes (i.e., authors) in statistics, each for a different time window. We are interested in (a) how many research areas there are in statistics, (b) what the baseline citation exchanges between different research areas are, and (c) how the research interests of individual authors evolve over time. Here, a co-citation network is a symmetrized citation network where each node is an author, and two nodes have an edge if they have been co-cited at least N times (for an N they picked) in the corresponding time window. The paper suggested that there are 3 primary research areas in statistics (interpreted as "Bayes", "Biostatistics", and "Non-parametric") and a handful of sub-areas, and that it is convenient to model each co-citation network by a DCMM model with K = 3. In detail, for each author i and time window t, 1 ≤ i ≤ n, 1 ≤ t ≤ T, they used a K × K matrix P^(t) to model the baseline citation exchanges between the primary research areas, a positive number θ_{it} to model the relative influence (in citations) of author i, and a weight vector π_{it} to model the research interest of author i. If we similarly let Θ^(t) = diag(θ_{1t}, θ_{2t}, . . . , θ_{nt}) and Π^(t) = [π_{1t}, π_{2t}, . . . , π_{nt}]′, then the Bernoulli probability matrix of the DCMM model at time t is Ω^(t) = Θ^(t)Π^(t)P^(t)(Π^(t))′Θ^(t). Using the DCMM model, they discovered a research triangle of statisticians (reminiscent of Efron's triangle for statistical philosophy [6]), and used it to visualize the trajectories of research interests of a handful of individual authors. Imagine instead that we use a different rank-K model (e.g., GRDPG) to model these networks, say, with Ω^(t) = Y^(t)J^(t)(Y^(t))′ for some matrices (Y^(t), J^(t)). It is unclear how to relate Y^(t) to baseline citation exchanges, research interests, and the relative influence of individual authors. This explains why (1.6) is of interest: given a rank-K network model, we wish to know when we can rewrite it as a DCMM model, so that we can enjoy the properties and interpretability of the DCMM model.

We now come back to (1.6). Seemingly, NMF is the key to answering this question. Consider a positive matrix Ω with rank K and suppose that it has an NMF as in (1.1) for two non-negative matrices Z ∈ R^{n,K} and P ∈ R^{K,K}: Ω = ZPZ′. Write Z = [z_1, z_2, . . . , z_n]′, so z_i′ is the i-th row. Without loss of generality, assume all z_i are nonzero vectors. Let Θ(i, i) = ‖z_i‖_1 and π_i = z_i/‖z_i‖_1, 1 ≤ i ≤ n. It is seen that Θ(i, i) > 0, that each π_i is a weight vector, and that Ω = ZPZ′ = ΘΠPΠ′Θ. Therefore, we can always rewrite a rank-K model as a DCMM model if Ω has an NMF as in (1.1). This explains our motivation underlying the NMF problem (1.1). Note that to answer the question in (1.6), a study of the NMF problem in (1.2) would not be relevant. For example, in a DCMM model, K is the number of communities, so an NMF as in (1.2) with an N > K would not be useful. For this reason, we have to focus on the NMF problem in (1.1).
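The rewriting step above is purely mechanical; a minimal sketch of our own (with hypothetical inputs) of the conversion from an NMF Ω = ZPZ′ to DCMM parameters (Θ, Π, P):

```python
import numpy as np

def nmf_to_dcmm(Z, P):
    """Given non-negative Z (n x K) and P (K x K) with Omega = Z P Z',
    return (Theta, Pi, P) with Omega = Theta Pi P Pi' Theta, following
    the argument above. A sketch, not the authors' code."""
    norms = Z.sum(axis=1)              # ||z_i||_1, since Z is non-negative
    assert (norms > 0).all(), "all rows z_i must be nonzero"
    Theta = np.diag(norms)             # Theta(i, i) = ||z_i||_1 > 0
    Pi = Z / norms[:, None]            # pi_i = z_i / ||z_i||_1 is a weight vector
    return Theta, Pi, P

# Sanity check on random non-negative factors:
rng = np.random.default_rng(2)
Z = rng.random((10, 3))
P = np.array([[1.0, 0.2, 0.1], [0.2, 1.0, 0.3], [0.1, 0.3, 1.0]])
Theta, Pi, _ = nmf_to_dcmm(Z, P)
assert np.allclose(Z @ P @ Z.T, Theta @ Pi @ P @ Pi.T @ Theta)
```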
1.3 Results and contributions

Write Ω = Y J_{K,m} Y′ as in Example 2, where J_{K,m} = diag(1, . . . , 1, −1, . . . , −1) is a K × K diagonal matrix and Y = [y_1, y_2, . . . , y_n]′ ∈ R^{n,K}. Let λ_k be the k-th eigenvalue of Ω and let ξ_k be the corresponding eigenvector. For 1 ≤ i ≤ n, define r_i ∈ R^{K−1} by r_i(k) = ξ_{k+1}(i)/ξ_1(i), 1 ≤ k ≤ K − 1. For any unit-norm vector y_0 ∈ R^K, let c(y_0) = min_{1≤i≤n}{|(y_i, y_0)|/‖y_i‖}. In Section 2, we show that the NMF problem for Ω is solvable if m ≤ K/2 and c(y_0) ≥ √(1 − 1/K) for some y_0; we call this the main condition. We show that, in order for the NMF problem to be solvable, the constant √(1 − 1/K) cannot be further reduced; in this sense, our results are sharp. Using this, we deduce several other results. In particular, we show that the NMF problem is solvable for Ω if Σ_{k=1}^{K−1} (|λ_{k+1}| · r_i²(k)) ≤ |λ_1|/(K − 1) for all 1 ≤ i ≤ n. We also extend our results to the case of m > K/2, and explain why we need a different proof in that case.

In Section 3, we apply our results on NMF to network modeling. We argue that for parameters in the most interesting range, we have (A) all ‖r_i‖ are bounded and (B) max_{2≤k≤K}{|λ_k/λ_1|} → 0, so the condition just mentioned holds. This implies that we can always rewrite a rank-K network model as a DCMM model if the parameters are in the most interesting range. We also discuss how to check the main condition in practice, where Ω is unknown. We tackle this by proposing an approach to estimating Ω, and we support our results with some real networks.

Our contributions are two-fold. First, we develop several new results on symmetric NMF (a problem of interest in many applications [5]). Existing works on symmetric NMF have focused on the case of m = 0 (so Ω is positive semi-definite; m is the number of negative eigenvalues of Ω). In this case, the best result is seen to be [26, Theorem 3.137], which can be viewed as a special case of our results; see Remark 2. This suggests that our results are sharp, for they are hard to improve even in the special case of m = 0. Note that our setting allows m to take any possible value, so it is clearly harder to study. For example, to show the results for the case of m = 0, it suffices to find a K × K orthogonal matrix Q such that Y Q′ is non-negative, since J_{K,m} is the identity matrix in this case. For our case, we must find a Q such that Y Q′ and QJ_{K,m}Q′ are simultaneously non-negative. Clearly, this requires new ideas. We tackle this by constructing a special class of matrices Q; see our proofs for details. Our approach is quite different from that of [26, Theorem 3.137] and is new.

Second, we shed interesting new light on different rank-K network models. In the literature, it is not unusual that many similar models are proposed for the same type of data sets. But in the end, we need to understand the advantages and disadvantages of different models and pick the most suitable one. Our study recommends the DCMM model, for it offers the practical interpretability that other rank-K models do not have, and it points out that a general rank-K model is also a DCMM model if the parameters are in the most interesting range. Such findings are valuable, for they can help us identify the most suitable models in real applications.

Notations. We denote by e_1, e_2, . . . , e_K the standard basis vectors of the K-dimensional Euclidean space, and we let e_0 = K^{−1/2}(e_1 + e_2 + . . . + e_K). For any q > 0 and vector x, ‖x‖_q denotes the ℓ_q-norm (when q = 2, we drop the subscript and write ‖x‖). For any two vectors x and y of the same dimension, (x, y) denotes the inner product. For a vector a ∈ R^n, diag(a) denotes the n × n diagonal matrix whose i-th diagonal entry is a_i, 1 ≤ i ≤ n. When Ω is an n × n matrix, diag(Ω) denotes the n × n diagonal matrix whose i-th diagonal entry is Ω(i, i), 1 ≤ i ≤ n.

2 Main results on NMF

This section presents our results on NMF; results on network modeling are in Section 3. Consider an n × n irreducible non-negative matrix Ω with rank K, where n is usually much larger than K. By Perron's theorem [9], at least one eigenvalue of Ω is positive. Fix 0 ≤ m ≤ K − 1 and suppose Ω has m negative eigenvalues. Let J_{K,m} = diag(1, . . . , 1, −1, . . . , −1) be the K × K diagonal matrix as in Example 2. By basic algebra, we can always write

Ω = Y J_{K,m} Y′, for a full rank matrix Y ∈ R^{n,K}. (2.7)

We can also show (e.g., as an exercise with Weyl's theorem [9]) that for any matrix as in (2.7), the numbers of positive and negative eigenvalues are (K − m) and m, respectively. Write

Y = [y_1, y_2, . . . , y_n]′, so that y_i′ is row i of Y, 1 ≤ i ≤ n. (2.8)

Define the subset of K-dimensional vectors that live on the unit sphere and whose last m entries are 0:

S_{K,m} = {x = (x_1, . . . , x_K)′ ∈ R^K : ‖x‖ = 1, x_{K−m+1} = . . . = x_K = 0}.

When m = 0, S_{K,m} is the unit sphere of R^K. The following theorem is proved in the supplement.

Theorem 2.1 Fix K ≥ 2, n ≥ K, and 0 ≤ m ≤ K/2. Consider the NMF problem (1.1) where Ω = Y J_{K,m} Y′ and Y are as in (2.7). Suppose there is a vector y_0 ∈ S_{K,m} such that

|(y_0, y_i)|/‖y_i‖ ≥ √(1 − 1/K), for all 1 ≤ i ≤ n. (2.9)

Then there exists a K × K orthogonal matrix Q such that both Y Q′ and QJ_{K,m}Q′ are non-negative. As a result, the NMF problem for Ω is solvable: Ω = ZPZ′ with Z = Y Q′ and P = QJ_{K,m}Q′.
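Condition (2.9) is easy to check numerically once Y and a candidate y_0 are given; a small utility of our own:

```python
import numpy as np

def condition_2_9_holds(Y, y0):
    """Check condition (2.9): |(y0, y_i)| / ||y_i|| >= sqrt(1 - 1/K) for all
    rows y_i of Y, with y0 a unit-norm vector. Our own sketch."""
    K = Y.shape[1]
    cosines = np.abs(Y @ y0) / np.linalg.norm(Y, axis=1)
    return bool((cosines >= np.sqrt(1 - 1 / K) - 1e-12).all())
```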
We have several comments. First, Theorem 2.1 assumes two conditions: m ≤ K/2 and (2.9). When K = 2, both conditions hold automatically, so the NMF problem is always solvable in this case; see Section 2.1. As far as we know, our proof is different from existing approaches. Second, in Theorem 2.1, we require y_0 ∈ S_{K,m}. This may seem restrictive, but it is not, because y_0 is a vector we choose for our own convenience. In fact, one of the most interesting settings for NMF seems to be that of Section 2.3, where we choose y_0 = (1, 0, . . . , 0)′, so the requirement is satisfied automatically. Also, when the last m entries of y_0 are nonzero but sufficiently small, Theorem 2.1 continues to hold if we modify the term √(1 − 1/K) slightly. Third, from a practical viewpoint, the condition m ≤ K/2 is mild: we rarely see a rank-K network model with m > K/2 (note that m can be estimated using the eigenvalues of the adjacency matrix A). For theoretical completeness, the case of m > K/2 is also interesting, but then there does not exist an orthogonal matrix Q such that QJ_{K,m}Q′ is non-negative. This is because a non-negative matrix must have a non-negative trace, while for any such Q, trace(QJ_{K,m}Q′) = K − 2m < 0. Therefore, we must find a different way to solve the NMF problem in this case. We discuss this in Section 2.4. Last, an interesting question is whether our idea is extendable to asymmetric NMF or complex NMF [19]. As a simple extension to asymmetric NMF, consider an n × p positive matrix Ω of rank K. By SVD, Ω = Y Z′ for an n × K matrix Y and a p × K matrix Z. Let y_i′ be the i-th row of Y and z_j′ the j-th row of Z. If there is a y_0 ∈ S_{K,m} such that for all i and j, |(y_i, y_0)|/‖y_i‖ ≥ √(1 − 1/K) and |(z_j, y_0)|/‖z_j‖ ≥ √(1 − 1/K), then we can find a K × K orthogonal matrix Q which rotates all rows of Y and Z to the first orthant simultaneously. In this case, the asymmetric NMF problem is solvable for Ω. For reasons of space, we leave further study along this line to the future.

Our result is sharp, for the constant √(1 − 1/K) in (2.9) cannot be further reduced. While we can show this for general K, we illustrate with the case of K = 2 for instructional purposes. In this case, we can rotate n unit-norm vectors y_1, y_2, . . . , y_n in R² simultaneously to the first orthant if and only if there is a unit-norm vector y_0 such that |(y_0, y_i)| ≥ √(1 − 1/2) (i.e., the angle between them is ≤ π/4) for all 1 ≤ i ≤ n. See Section 2.1 and Remark 3 for more discussion. Another way to see the sharpness is to consider the case of m = 0 (so Ω is positive semi-definite). In this case, condition (2.9) is hard to improve and is the weakest condition we have so far in the literature; see Remark 2.

2.1 The case of K = 2

In this case, the NMF problem is always solvable, as the two conditions of Theorem 2.1, m ≤ K/2 and (2.9), hold automatically. In fact, first, since Ω has at least one positive eigenvalue and K = 2, we have either m = 0 or m = 1, and so m ≤ K/2. Second, we can always find a y_0 ∈ S_{K,m} such that (2.9) is satisfied. In detail, let 0 ≤ θ_i < 2π be the angle from e_1 (e_1 = (1, 0)′) to y_i counterclockwise, and let θ_min and θ_max be the smallest and largest values of all θ_i. Now, when m = 0, let y_0 be the unit vector such that the angle from e_1 to y_0 is (θ_max + θ_min)/2, counterclockwise. When m = 1, take y_0 = (1, 0)′. The following theorem is proved in the supplement.

Theorem 2.2 Fix K = 2, 0 ≤ m ≤ K − 1, n ≥ K, and let y_0 be as above. In this case, m ≤ K/2 and (2.9) holds for the y_0 above, so the NMF problem is always solvable for Ω.
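The construction of y_0 in Section 2.1 is explicit enough to code directly; the sketch below (ours, for the m = 0 case) returns the unit vector bisecting the angular range of the rows of Y:

```python
import numpy as np

def y0_for_K2(Y):
    """For K = 2 and m = 0: the y0 of Section 2.1, i.e., the unit vector whose
    angle from e1 is (theta_max + theta_min)/2. When Omega = Y Y' is
    non-negative, pairwise angles between rows are <= pi/2, so this y0 is
    within pi/4 of every row and (2.9) holds. A sketch; it assumes the
    angles do not wrap around 0."""
    angles = np.arctan2(Y[:, 1], Y[:, 0]) % (2 * np.pi)  # angle from e1, counterclockwise
    mid = (angles.min() + angles.max()) / 2
    return np.array([np.cos(mid), np.sin(mid)])
```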
2.2 When y_0 is a scaled weighted average of the y_i's

For the y_0 in (2.9), an interesting choice is to let it be proportional to a weighted average of the y_i's. Call w ∈ R^n a weight vector if all of its entries are non-negative and sum to 1. Recall that Ω = Y J_{K,m} Y′. Define a proxy of Ω by Ω̃ = Y Y′; note that Ω̃ = Ω if m = 0. Introduce y^(w) ∈ R^K and β^(w) ∈ R^n by y^(w) = Σ_{i=1}^n w_i y_i = Y′w and β^(w) = Ω̃w. Since Y is full rank, y^(w) ≠ 0. Take y_0 = y^(w)/‖y^(w)‖. Condition (2.9) then reduces to

|β_i^(w)| / √(Ω̃(i, i) · (w′Ω̃w)) ≥ √(1 − 1/K), for all 1 ≤ i ≤ n. (2.10)

Theorem 2.3 Fix K ≥ 3, 0 ≤ m ≤ K/2, and n ≥ K. The NMF problem (1.1) is solvable for Ω if the last m entries of y^(w) are 0 and (2.10) holds.

Theorem 2.3 follows from Theorem 2.1 by direct calculations, so the proof is omitted. We require that the last m entries of y^(w) are 0, for we need y_0 ∈ S_{K,m} in Theorem 2.1. As explained before, this may seem restrictive, but it is not: in the most interesting case, to be discussed in Section 2.3, we take y^(w) proportional to (1, 0, . . . , 0)′, so the requirement is satisfied automatically. See details therein. When m = 0, Ω̃ = Ω and β^(w) = Ωw. In this case, condition (2.10) reduces to

|β_i^(w)| / √(Ω(i, i) · (w′Ωw)) ≥ √(1 − 1/K). (2.11)

We have the following corollary, the proof of which is straightforward and so is omitted.

Corollary 2.1 Fix n ≥ K ≥ 3. The NMF problem (1.1) is solvable for Ω if m = 0 and (2.11) holds.

Remark 2. If we take w = n^{−1}1_n, then (2.10) reduces to |β_i|/√(Ω(i, i) · (1_n′Ω1_n)) ≥ √(1 − 1/K) with β = Ω1_n, and Corollary 2.1 reduces to [26, Theorem 3.137], where m = 0 and Ω is positive semi-definite. Our setting is more general, as Ω may have m negative eigenvalues for any m ≤ K/2. For the case of m = 0, [26, Theorem 3.137] (see also [27]) is by far the best result available. The book [26] presents several other results on this topic, but they need conditions that are less intuitive or harder to check. Recall that the constant √(1 − 1/K) in (2.9) cannot be further reduced. These suggest that Theorem 2.1 is hard to improve and that our results are sharp.

Remark 3. (When can we rotate n vectors to the first orthant?) As a stylized application, consider the following problem. Let x_1, x_2, . . . , x_n be n unit-norm vectors in R^K, n ≥ K, and let α_K(x_1, x_2, . . . , x_n) = min_{1≤i,j≤n}{(x_i, x_j)}. For what values of α_K(x_1, x_2, . . . , x_n) can we rotate all n points simultaneously to the first orthant? Let X = [x_1, x_2, . . . , x_n]′ and assume X is full rank without loss of generality. The matrix Ω = XX′ is symmetric and positive semi-definite. Let α*_K = 0 if K = 2 and α*_K = √(1 − 1/K) if K ≥ 3. Applying Theorem 2.1 with m = 0, it follows that when α_K(x_1, x_2, . . . , x_n) ≥ α*_K, we can rotate all n points to the first orthant. Note that we cannot do so if α_K(x_1, x_2, . . . , x_n) < 0.
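Condition (2.11) (and, with Ω̃ in place of Ω, condition (2.10)) is again directly checkable; a sketch of our own, defaulting to the choice w = n^{−1}1_n of Remark 2:

```python
import numpy as np

def condition_2_11_holds(Omega, K, w=None):
    """Check (2.11) for the m = 0 case: |beta_i| / sqrt(Omega(i,i) * w'Omega w)
    >= sqrt(1 - 1/K) for all i, where beta = Omega w. Defaults to the uniform
    weight vector w = (1/n) 1_n of Remark 2. Our own sketch."""
    n = Omega.shape[0]
    w = np.full(n, 1.0 / n) if w is None else w
    beta = Omega @ w                                  # beta^(w)
    lhs = np.abs(beta) / np.sqrt(np.diag(Omega) * (w @ beta))   # w @ beta = w'Omega w
    return bool((lhs >= np.sqrt(1 - 1 / K) - 1e-12).all())
```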
2.3 When Y is constructed by the spectral decomposition of Ω

So far, we have tried to keep our results as general as we can, and Y can be any matrix satisfying Ω = Y J_{K,m} Y′. An interesting special case is when Y is constructed using the spectral decomposition of Ω, which we now discuss. For 1 ≤ k ≤ K, let λ_k be the k-th largest eigenvalue of Ω, and let ξ_k be the corresponding (unit-norm) eigenvector. In the literature, λ_1 and ξ_1 are called the Perron root and Perron vector, respectively, and we can always assume all entries of ξ_1 are positive since Ω is irreducible and non-negative (e.g., [26]). Write Ξ = [ξ_1, ξ_2, . . . , ξ_K] and define the n × (K − 1) so-called matrix of entry-wise ratios R by R(i, k) = ξ_{k+1}(i)/ξ_1(i), 1 ≤ k ≤ K − 1, 1 ≤ i ≤ n [12, 16]. Introduce

D = diag(|λ_1|, |λ_2|, . . . , |λ_K|), D_0 = diag(|λ_2|, . . . , |λ_K|), (2.12)

and write

R = [r_1, r_2, . . . , r_n]′, Y = ΞD^{1/2} = [y_1, y_2, . . . , y_n]′. (2.13)

By spectral decomposition, Ω = ΞD^{1/2}J_{K,m}D^{1/2}Ξ′ = Y J_{K,m} Y′. Now, in Section 2.2, if we take w = cξ_1 where c = 1/‖ξ_1‖_1, then by basic algebra and the definitions, it is seen that y^(w) = c√λ_1 e_1, so y_0 = e_1 and, in particular, y_0 ∈ S_{K,m}. Moreover, β_i^(w) = cλ_1ξ_1(i), w′Ω̃w = c²λ_1, and Ω̃(i, i) = ‖y_i‖². Combining these, condition (2.10) reduces to

r_i′D_0r_i ≡ Σ_{k=1}^{K−1} (|λ_{k+1}| · r_i²(k)) ≤ |λ_1|/(K − 1), for all 1 ≤ i ≤ n. (2.14)

The following theorem is proved in the supplement.

Theorem 2.4 Fix K ≥ 3, m ≤ K/2, and n ≥ K. The NMF problem (1.1) is solvable if (2.14) holds.

Note that, as in most works on NMF (e.g., [26]), the main goal is to find easy-to-check conditions under which the NMF is solvable. Such conditions are sufficient but not necessary. (A numerical check of (2.14) is sketched at the end of this section.)

2.4 The case of m > K/2

So far, we have focused on the case of m ≤ K/2, which is the case most frequently found in real networks. For completeness, we now consider the case where m > K/2. Since 0 ≤ m ≤ K − 1, such a case only exists when K ≥ 3. In Theorem 2.1, we show that when m ≤ K/2, we can find an orthogonal matrix Q such that QJ_{K,m}Q′ is non-negative. When m > K/2, we cannot do this, as for any such Q, trace(QJ_{K,m}Q′) = (K − 2m) < 0. Therefore, we need a new approach. A convenient approach is to redefine J_{K,m}: we select a subset of the positive diagonal entries of J_{K,m} and add a positive number to each of them. Success has been shown in a related setting (e.g., [3]). Using such a trick, we can extend all our main results to the case of m > K/2. For reasons of space, we only consider an extension of Theorem 2.4, as the claim of that theorem is probably the most explicit. Also for reasons of space, we only consider the case where we add a number to the first diagonal entry of J_{K,m}; the idea is readily extendable to more general cases. Let Q be the set of all orthogonal matrices whose first column is K^{−1/2}(1, 1, . . . , 1)′. Fix 1 ≤ m ≤ K − 1. For any Q ∈ Q, write Q = [Q^(K−m), Q^(m)], where Q^(K−m) and Q^(m) are the sub-matrices of Q consisting of the first (K − m) columns and the remaining m columns, respectively. Introduce the constant

a_m = 1 + K · inf_{Q∈Q} max_{1≤i,j≤K} {H(i, j) : H = 2Q^(m)(Q^(m))′ − I_K} (I_K: the K × K identity matrix).

Theorem 2.5 extends Theorem 2.4 and is proved in the supplement.

Theorem 2.5 Fix K ≥ 3, 0 ≤ m ≤ (K − 1), and n ≥ K. We have a_m = 1 if m ≤ K/2 and a_m = (K − 1) if m = K − 1. Also, the NMF problem is solvable for Ω if r_i′D_0r_i ≡ Σ_{k=1}^{K−1} |λ_{k+1}|r_i²(k) ≤ |λ_1|/[a_m(K − 1)] for all 1 ≤ i ≤ n.

When m ≤ K/2, a_m = 1, and the claim here is the same as that of Theorem 2.4.

Remark 4. When the NMF problem for Ω is solvable, the solution is usually not unique without a proper regularity condition (e.g., [5]). In our setting, once we can write Ω = ΘΠPΠ′Θ for some non-negative matrices (Θ, Π, P) as in (1.5), the factorization is unique if (a) for each 1 ≤ k ≤ K, there is at least one i such that π_i = e_k, where e_k is the k-th standard Euclidean basis vector of R^K, and (b) all diagonal entries of P are 1 (see [15, 16] for a proof).

Remark 5. When condition (2.9) of Theorem 2.1 holds for some vector y_0, how do we find such a y_0 and the orthogonal matrix Q of Theorem 2.1 numerically? This is an interesting question, and we discuss it in Section F of the supplement.
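Here is the promised check of (2.14), a sketch of our own that builds the spectral quantities of Section 2.3 directly from Ω:

```python
import numpy as np

def condition_2_14_holds(Omega, K):
    """Check (2.14): r_i' D0 r_i <= |lambda_1| / (K - 1) for all i, using the
    spectral construction of Section 2.3. Our own sketch."""
    lam, xi = np.linalg.eigh(Omega)
    idx = np.argsort(-np.abs(lam))[:K]        # the K nonzero eigenvalues
    idx = idx[np.argsort(-lam[idx])]          # descending; negatives come last
    lam, xi = lam[idx], xi[:, idx]
    xi1 = xi[:, 0] * np.sign(xi[:, 0].sum())  # Perron vector, taken entry-wise positive
    R = xi[:, 1:] / xi1[:, None]              # R(i, k) = xi_{k+1}(i) / xi_1(i)
    lhs = (R ** 2 * np.abs(lam[1:])).sum(axis=1)   # r_i' D0 r_i
    return bool((lhs <= np.abs(lam[0]) / (K - 1) + 1e-12).all())
```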
3 When is a rank-K network model also a DCMM model?

So far, we have focused on general NMF settings, where we showed that the NMF problem (1.1) is solvable when, for example, (2.14) holds. We now apply these results to networks and study when we can rewrite a rank-K network model as a DCMM model. Network analysis (e.g., community detection, membership estimation, link prediction) is a well-studied area, where we have a lot of knowledge on what the regime of major interest is and which conditions are reasonable [16, 15, 18, 29]. In fact, in network analysis, we usually use an asymptotic framework where n → ∞, K is fixed, and other parameters may vary with n, and where it is quite acceptable to assume (A) all ‖r_i‖ are bounded and (B) max_{2≤k≤K}{|λ_k/λ_1|} → 0; the notations are the same as those in Theorem 2.4. In fact, (A)-(B) model the most interesting regime in network analysis. In Theorem 2.4, the main condition, (2.14), is r_i′D_0r_i ≤ |λ_1|/(K − 1) for all 1 ≤ i ≤ n. Once (A)-(B) hold, (1/|λ_1|)D_0 → 0 and (2.14) holds, so we can always rewrite a rank-K network model as a DCMM model when (A)-(B) hold. The remaining question is then why (A)-(B) are reasonable assumptions in network analysis, and why they model the most interesting regime. We now explain this in detail.

Let Ω be the Bernoulli probability matrix as in (1.3). Suppose Ω = Y PY′, where Y ∈ R^{n,K} is full rank, P ∈ R^{K,K}, and (Y, P) are not necessarily non-negative. Denote G = Y′Y; note that G is a K × K symmetric and positive definite matrix, and let G^{1/2} be its (unique) square root. We usually assume Y is balanced in that (a) the ℓ₂-norms of all K columns are of the same order, and (b) there is no severe collinearity among the K columns [15, 18]. As a result, all eigenvalues of G are of the same order. By basic algebra, there is a K × K orthogonal matrix Q such that Ξ = [ξ_1, ξ_2, . . . , ξ_K] = Y B, where B = G^{−1/2}Q. Write B = [b_1, b_2, . . . , b_K], and let α_i be the angle between b_1 and y_i. Let M(Ω) = max_{1≤i≤n}{1/|cos(α_i)|} and define the matrix V ∈ R^{K,K−1} by

V(i, k) = b_{k+1}(i)/b_1(i), 1 ≤ i ≤ K, 1 ≤ k ≤ K − 1. (3.15)

Write V = [v_1, v_2, . . . , v_K]′, so v_k′ is row k of V, 1 ≤ k ≤ K. For any symmetric matrix P, λ_k(P) denotes the k-th largest eigenvalue; to be consistent with earlier notations, we simply write λ_k(Ω) as λ_k. Lemma 3.1 is proved in the supplement.

Lemma 3.1 We have B = diag(b_1)[1_K, V], P = Bdiag(λ_1, . . . , λ_K)B′, b_1 is an eigenvector of PG, and P(k, k) = b_1²(k)[λ_1 + v_k′diag(λ_2, . . . , λ_K)v_k], 1 ≤ k ≤ K. Moreover, if as n → ∞, λ_1(G) ≤ c_0λ_K(G) for a constant c_0 > 0, then condition (B) holds if and only if max_{2≤k≤K}{|λ_k(P)/λ_1(P)|} → 0, and max_{1≤i≤n}{‖r_i‖} ≤ CM(Ω).

It is seen that conditions (A)-(B) hold if M(Ω) ≤ C and max_{2≤k≤K}{|λ_k(P)/λ_1(P)|} → 0. The first condition is mild: it only requires that no y_i is nearly orthogonal to b_1. To boil these conditions down to a more explicit and vivid form, we consider the DCMM model. It is fine to consider the DCMM model here, for (a) we only use the model to explain why conditions (A)-(B) are reasonable, and (b) the argument below is extendable beyond the DCMM model. In the DCMM model, Ω = ΘΠPΠ′Θ, so we can write Ω = Y PY′ if we let Y = ΘΠ, where we note that (Y, P) are non-negative. Recall that G = Y′Y (a positive definite K × K matrix). Lemma 3.2 is proved in the supplement.

Lemma 3.2 If (Y, P) are non-negative, then first, PG is an irreducible non-negative matrix and b_1 is its Perron vector, so all entries of b_1 are strictly positive. Second, all vectors r_i live in a simplex with v_1, v_2, . . . , v_K as its vertices, so max_{1≤i≤n}{‖r_i‖} ≤ max_{1≤k≤K}{‖v_k‖}. Last, if λ_1(G) ≤ c_0λ_K(G), then max_{1≤i≤n}{‖r_i‖} ≤ CM(Ω) ≤ C max_{1≤k≤K}{‖b_1‖/b_1(k)}.
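Conditions (A)-(B) are also easy to inspect numerically for a given Ω; a small diagnostic of our own, reusing the spectral construction of Section 2.3:

```python
import numpy as np

def inspect_A_B(Omega, K):
    """Report max_i ||r_i|| (condition (A): should stay bounded) and
    max_{2<=k<=K} |lambda_k / lambda_1| (condition (B): should be small).
    Our own diagnostic sketch."""
    lam, xi = np.linalg.eigh(Omega)
    idx = np.argsort(-np.abs(lam))[:K]
    idx = idx[np.argsort(-lam[idx])]
    lam, xi = lam[idx], xi[:, idx]
    xi1 = xi[:, 0] * np.sign(xi[:, 0].sum())   # Perron vector
    R = xi[:, 1:] / xi1[:, None]               # rows r_i'
    return np.linalg.norm(R, axis=1).max(), np.abs(lam[1:] / lam[0]).max()
```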
Now, first, in a DCMM model, the entry P(k, ℓ) measures the baseline probability that there is an edge between a node in community k and a node in community ℓ. Therefore, the most difficult, and most interesting, case is where all P(k, ℓ) have similar values. In this case, P is close to rank 1, or in other words, max_{2≤k≤K}{|λ_k(P)/λ_1(P)|} → 0, and so max_{2≤k≤K}{|λ_k/λ_1|} → 0. See for example [15, 18], where it was further pointed out that the most difficult case for network analysis is when max_{2≤k≤K}{|λ_k|} ≤ L_n · √λ_1 for a multi-log(n) factor L_n. Therefore, condition (B) models the most difficult case of network analysis and so is of major interest. Moreover, by Lemma 3.2, max_{1≤i≤n}{‖r_i‖} ≤ C if all entries of b_1 are of the same order. This is only a mild condition, for b_1 is the Perron vector of PG. Last, by Lemma 3.2, we also have max_{1≤i≤n}{‖r_i‖} ≤ C if we alternatively assume max_{1≤k≤K}{‖v_k‖} ≤ C. Recall that B = G^{−1/2}Q = [b_1, b_2, . . . , b_K] and that v_1′, v_2′, . . . , v_K′ are the rows of V, formed by dividing b_2, b_3, . . . , b_K by b_1 entry-wise, where b_1 is the Perron vector. Since G is positive definite with all eigenvalues of the same order, Q is orthogonal, and V is properly scaled (and all of these matrices have small sizes), it is only a mild condition to assume max_{1≤k≤K}{‖v_k‖} ≤ C. These explain why conditions (A)-(B) are mild conditions and why they model the most challenging regime of network analysis.

4 Real data examples, and especially how to check condition (2.14)

Let a_i = (1/|λ_1|)r_i′D_0r_i, 1 ≤ i ≤ n. Condition (2.14) can be rewritten as

a_i ≤ 1/(K − 1), for all 1 ≤ i ≤ n.

In applications, Ω is unknown, so it is unclear how to obtain the a_i. A straightforward approach is to estimate the a_i with the eigenvalues and eigenvectors of the adjacency matrix A, but the estimates may be too noisy. We propose the following approach, which is inspired by Lemmas 3.1-3.2 and the recent Mixed-SCORE approach [16]. Let (Y, V) be as above. Mixed-SCORE suggests an interesting idea for estimating V and (a normalized version of) Y, denoted by Π; see the details therein. Let λ̂_k be the k-th eigenvalue of A and let ξ̂_k be the corresponding eigenvector. Write Ξ̂ = [ξ̂_1, ξ̂_2, . . . , ξ̂_K] = [ẑ_1, ẑ_2, . . . , ẑ_n]′, so ẑ_i′ is row i of Ξ̂. Our approach runs as follows (a schematic implementation is sketched after Remark 6).

• Apply Mixed-SCORE to obtain an estimate (V̂, Π̂) of (V, Π). Let v̂_k′ be row k of V̂ and let π̂_i′ be row i of Π̂, 1 ≤ k ≤ K, 1 ≤ i ≤ n.

• Estimate b_1 by b̂_1, where b̂_1(k) = [λ̂_1 + v̂_k′diag(λ̂_2, . . . , λ̂_K)v̂_k]^{−1/2}. Let B̂ = diag(b̂_1)[1_K, V̂], and estimate P by P̂ = B̂diag(λ̂_1, λ̂_2, . . . , λ̂_K)B̂′. Let ŷ_i = (‖ẑ_i‖_1/‖B̂′π̂_i‖_1)π̂_i, 1 ≤ i ≤ n, and let Ŷ = [ŷ_1, ŷ_2, . . . , ŷ_n]′.

• Let µ̂_k be the k-th eigenvalue of the matrix Ω̂ = Ŷ P̂ Ŷ′ and let η̂_k be the corresponding eigenvector. In the definition of the a_i (see above and (2.14)), replace (λ_k, ξ_k) by (µ̂_k, η̂_k) and denote the resulting quantity by â_i, 1 ≤ i ≤ n. These are our estimates of the a_i.

The approach can be shown to be consistent for Ω under some regularity conditions; we skip the study, for it is beyond the scope of this paper. In this algorithm, (Ŷ, P̂) are not automatically non-negative, and to check whether NMF is solvable for Ω̂, we can check whether

â_i ≤ 1/(K − 1), for all 1 ≤ i ≤ n. (4.16)

Remark 6. Condition (2.14) of Theorem 2.4 is only a sufficient condition for NMF, not a necessary one. It could happen that an NMF is solvable for an Ω while (2.14) does not hold.
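The following is a schematic sketch of the three steps above. It treats Mixed-SCORE as a black box: mixed_score is a placeholder (hypothetical, not a real package function) for an implementation of [16] that returns (V̂, Π̂); everything else follows the bullets, and none of this is the authors' code.

```python
import numpy as np

def estimate_a_values(A, K, mixed_score):
    """Estimate a_1, ..., a_n from the adjacency matrix A, following the
    three-step recipe above; compare the output against 1/(K-1) as in (4.16)."""
    # Leading K eigen-pairs of A (by magnitude), negative eigenvalues last.
    lam, xi = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(lam))[:K]
    idx = idx[np.argsort(-lam[idx])]
    lam_hat, Xi_hat = lam[idx], xi[:, idx]

    # Step 1: Mixed-SCORE estimates of (V, Pi).
    V_hat, Pi_hat = mixed_score(Xi_hat)

    # Step 2: b1_hat(k) = [lam_1 + v_k' diag(lam_2,...,lam_K) v_k]^{-1/2},
    # then B_hat, P_hat, and Y_hat (rows y_i proportional to pi_i).
    quad = (V_hat ** 2 * lam_hat[1:]).sum(axis=1)        # v_k' diag(.) v_k
    b1_hat = (lam_hat[0] + quad) ** -0.5                 # assumes the bracket is positive
    B_hat = b1_hat[:, None] * np.hstack([np.ones((K, 1)), V_hat])
    P_hat = B_hat @ np.diag(lam_hat) @ B_hat.T
    scale = np.abs(Xi_hat).sum(1) / np.abs(Pi_hat @ B_hat).sum(1)
    Y_hat = scale[:, None] * Pi_hat

    # Step 3: eigen-pairs (mu_k, eta_k) of Omega_hat = Y_hat P_hat Y_hat',
    # then a_hat_i = (1/|mu_1|) * sum_k |mu_{k+1}| * r_i(k)^2.
    Omega_hat = Y_hat @ P_hat @ Y_hat.T
    mu, eta = np.linalg.eigh(Omega_hat)
    jdx = np.argsort(-np.abs(mu))[:K]
    jdx = jdx[np.argsort(-mu[jdx])]
    mu, eta = mu[jdx], eta[:, jdx]
    R_hat = eta[:, 1:] / eta[:, :1]
    return (R_hat ** 2 * np.abs(mu[1:])).sum(1) / np.abs(mu[0])
```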
We now consider some real examples. The weblog data set is well-known [22]; with some light preprocessing, the network has 1,222 nodes (each a blog) and 16,714 edges (each a two-way hyperlink). The network has two communities, Democratic and Republican. For this data set, a rank-2 model is appropriate, so we have (n, K) = (1222, 2) (e.g., [30, 12, 18]). Let Ω be the Bernoulli probability matrix as in (1.3). By Theorem 2.2, when K = 2, we can always decompose Ω as Ω = Y PY′ for a non-negative n × 2 matrix Y and a 2 × 2 non-negative matrix P. Now, by the argument following (1.6), we can rewrite Ω = ΘΠPΠ′Θ as in (1.5), so Ω satisfies a DCMM model. The same claim can be drawn for the karate data set [30, 12], where we similarly have K = 2.

As another example, we consider the UKFaculty network (e.g., see [17, Table 1]). It is reasonable to model this network with a rank-K model with (n, K) = (81, 3) and m ≤ K/2. By Theorem 2.4, the model can be rewritten as a DCMM model if (4.16) holds. Following the discussion above, we first obtain an estimate Ω̂ of Ω. We then use Ω̂ to obtain the â_i and check whether (4.16) holds. The results are in Figure 1 (left), where the maximum of â_1, â_2, . . . , â_n is slightly smaller than 0.5 (1/(K − 1) = 0.5 as K = 3), suggesting that (4.16) holds. Moreover, let µ̂_k be the k-th eigenvalue of Ω̂ and let η̂_k be the corresponding eigenvector. Let D̂ = diag(|µ̂_1|, . . . , |µ̂_K|) and Ŷ = [η̂_1, . . . , η̂_K]D̂^{1/2}, so that Ω̂ = Ŷ J_{K,m}Ŷ′. Let Q be the 3 × 3 matrix whose three rows are (1/√3, 1/√6, 1/√2), (1/√3, 1/√6, −1/√2), and (1/√3, −2/√6, 0), respectively, and define Ẑ = Ŷ Q′. It is seen that Ω̂ = Ŷ J_{K,m}Ŷ′ = Ẑ[QJ_{K,m}Q′]Ẑ′, where QJ_{K,m}Q′ is seen to be non-negative (this is verified numerically in the sketch at the end of this section). Moreover, for 1 ≤ i ≤ n, let ẑ_i be the smallest entry in row i of Ẑ. Figure 1 (right) plots the histogram of {ẑ_i}_{i=1}^n. The results suggest that all ẑ_i are non-negative, so the matrix Ŷ Q′ is non-negative. Therefore, Ω̂ has an NMF: Ω̂ = Ẑ[QJ_{K,m}Q′]Ẑ′. These results suggest that for the UKFaculty data set, (4.16) holds and it is reasonable to model the network with a DCMM model.

In summary, in many recent works on network analysis, we frequently assume that a DCMM model holds for the setting at hand, but we rarely check whether such an assumption is valid. Our NMF results provide an approach to checking whether a network satisfies a DCMM model.
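For completeness, the explicit rotation used in the UKFaculty example is easy to verify numerically (a check of our own):

```python
import numpy as np

# Q is orthogonal with first column 3^{-1/2}(1, 1, 1)', and Q J_{3,1} Q' is
# entry-wise non-negative, as claimed in the UKFaculty example above.
s3, s6, s2 = np.sqrt(3), np.sqrt(6), np.sqrt(2)
Q = np.array([[1/s3,  1/s6,  1/s2],
              [1/s3,  1/s6, -1/s2],
              [1/s3, -2/s6,  0.0]])
J = np.diag([1.0, 1.0, -1.0])                 # J_{K,m} with (K, m) = (3, 1)

assert np.allclose(Q @ Q.T, np.eye(3))        # Q is orthogonal
print(np.round(Q @ J @ Q.T, 12))              # [[0,1,0],[1,0,0],[0,0,1]]: non-negative
```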
5 Discussion

We derive a sharp NMF result and apply it to network modeling. Both NMF and network analysis are important areas in machine learning, with applications in image processing, social media, NLP, and cancer studies [5, 23, 21]. In comparison, NMF is more theoretically oriented and network analysis is more application oriented. Our paper makes an interesting connection between the two areas. On the one hand, we find a new application of NMF theory. This may open the door to a line of research in which we find new applications of NMF in areas such as text learning [21] and tensor analysis [14]. On the other hand, we gain valuable insight into which network models are most suitable in applications. This is crucial, for a suitable model is the starting point for methods and theory. Our study may help researchers identify the right network models and so channel their strengths in the right direction.

Our work may also help develop new methods. For example, compared to the general rank-K model, the DCMM model has more structure that we can exploit (see [16, 18], where a simplex structure was discovered in the spectral domain using specific features that the DCMM model has but a general rank-K model does not). Our approach is useful, for it ensures that in certain settings we can use the more specific model and exploit the structure it provides. Another point is that existing NMF theory usually requires some crucial conditions, and whether such conditions are reasonable in real applications often remains unclear, especially when the conditions are imposed on matrices that are not directly observable. In Sections 3-4, we tackle this problem by providing (a) a detailed explanation of why our NMF assumptions are reasonable in network analysis and (b) new ideas for checking the NMF conditions in real applications when they involve matrices that are not directly observable. We hope our efforts may spark new research along this line.

Acknowledgements. The research was supported in part by NSF Grant DMS-2015469. The author would like to thank Naomi Shaked-Monderer, Helena Smigoc, and Changqing Xu for helpful pointers, and Zheng Tracy Ke and Jiajun Tang for very helpful comments.
Summary Of The Paper

This paper considers a particular case of the NMF problem, along with its application to network modelling. The main contributions of the paper are two-fold. On one hand, the authors put forth several new results on the symmetric case of the NMF problem, further generalizing previous lines of work that considered special cases of the current formulation from the paper. On the other hand, as a secondary contribution, the authors provide both a quantitative and a qualitative discussion of use-cases of different rank-K network models.

Strengths And Weaknesses

The paper is fairly original in its inception, and provides a generalization of previous results, with previous results being recovered from the current formulation. It provides a very good motivation for why one might care to consider an NMF approach to analyzing network problems. The paper is quite dense, and a bit difficult to follow in places, also due to several omitted proofs and results called upon from other works, and not always with enough context. The authors could consider improving this aspect.

Questions

- Perhaps not a good idea to use acronyms in the abstract without first explaining them - most readers of the abstract would not be familiar with DCMM, so a brief comment would be appropriate.
- What about the asymmetric case? Can a brief comment be made on it, in light of the results in this paper?
- Could clarify what "pure" nodes are in Remark 1, line 82 - as it might not be clear to everyone outside of the SBM community.
- What about an extension to a directed stochastic block model? (say, where the input adjacency matrix is a skew-symmetric matrix). And somewhat related to this, what about complex NMF?
- Why exactly is the case m < K/2 "probably the most interesting case in practice"?
- The notation J_{K,m} is a bit cumbersome to follow in places; perhaps a notation without subscripts could be used throughout, to improve readability.
- In Section 3, the authors repeatedly refer to "network analysis" applications as if it were a particular problem/task - but it should be made clear at the beginning what the exact problem to be addressed is.
- Are there any implications of the results concerning the ability to derive guarantees for SBM/community detection models in the very sparse regime? (one where the edge density in the graph is required to be above (log n)/n and extra effort is required to regularize appropriately before proceeding with a spectral approach).
- The authors could comment more on the converse of Thm 2.4 when first mentioned.
- The Figure on page 9 should have a bare minimum of axis labeling and sub/captions.
- Section 4 could make it clear what type of insights one can obtain with the NMF-based approach which perhaps one cannot obtain otherwise.
- Typos: line 314: "all rows of r_i lives with"; line 357: "(n,K) = (1,222, 2)" - confusing at first.

Limitations

None.
1.1 Several recent rank-K network models, and especially the DCMM model Consider a symmetric connected network with n nodes and let A be the adjacency matrix, where A(i, j) = 1 if there is an edge connecting nodes i and j and A(i, j) = 0 otherwise. As a convention, we do not allow self edges, so all diagonal entries of A are 0. We assume the network has K perceivable communities (communities are scientifically meaningful but mathematically hard to define; intuitively, they are clusters of nodes that have more edges “within" than “across" [12, 30]): C1, C2, . . . , CK . In many network models, we assume that the upper triangular entries of A are independent Bernoulli random variables, and that there is an n× n non-negative matrix Ω such that Ω(i, j) = P(A(i, j) = 1) for all 1 ≤ i 6= j ≤ n. Let diag(Ω) ∈ Rn,n be the diagonal matrix where the i-th diagonal entry is Ω(i, i) and let W ∈ Rn,n be the matrix where W (i, j) = A(i, j)− Ω(i, j) if i 6= j and W (i, j) = 0 otherwise. The matrix W is known as the generalized Wigner matrix. With these notations, A = Ω− diag(Ω) +W. (1.3) We call Ω the Bernoulli probability matrix. Frequently, we assume a rank-K model for Ω: Ω is an irreducible non-negative matrix where the rank is K. (1.4) Note that K is the number of communities and has important practical meanings. Also, irreducibility is a natural assumption as we assume the network is connected (otherwise, we can study each connected component of the network separately). Below are some examples of rank-K models. Example 1 (RDPG Model). In a Random Dot Product Graph (RDPG) model [28], we fix a Kdimensional distribution F , generate yi iid∼ F , and let Ω(i, j) = (yi, yj) (inner product), 1 ≤ i, j ≤ n. If we write Y = [y1, y2, . . . , yn]′ (which is an n ×K matrix), then Ω = Y Y ′. The model is wellknown in network and graph modeling. However, a noteworthy issue is that, the matrix Ω defined in this way is always positive semi-definite. This makes the model relatively restrictive (e.g., [25]). Example 2 (GRDPG Model). To address the issue above, Rubin-Delanchy et al [25] proposed the generalized RDPG (GRDPG). Fix K and 0 ≤ m < K. Let JK,m = diag(1, 1, . . . ,−1, . . . ,−1) be the K ×K diagonal matrix where the first (K −m) diagonal entries are 1 and the remaining diagonal entries are −1. With a similar Y matrix as in RDPG, GRDPG assumes Ω = Y JK,mY ′. An Ω defined in this way has negative eigenvalues, but we have to choose (Y, JK,m) carefully to make sure that Ω is non-negative; this problem is not immediately clear. Example 3. It was argued (e.g., [4]) that the Bernoulli probability matrix Ω in a graphon model can be well-approximated by a low-rank matrix provided with some regularity conditions. In all these examples above, the parameters do not have explicit practical meanings (at least not directly or not sufficiently), so in a real application example, it remains unclear how to interpret the estimates of these parameters. Therefore, it is desirable to have models where the parameters have more explicit meanings in practice and so are easier to interpret. The Degree-Corrected Mixed-Membership (DCMM) model is one of such models. Proposed by [15] (see also [29]), the model is motivated by the observation that natural networks usually have severe degree heterogeneity and mixed-memberships. 
To accommodate both features, for each node i, 1 ≤ i ≤ n, we use a (strictly positive) parameter θi to model the degree heterogeneity and a weight vector πi ∈ RK to model the memberships, where πi(k) = weight node i puts in Ck, 1 ≤ k ≤ K. We call node i pure if πi is degenerate (i.e., only one entry is nonzero) and mixed otherwise. We also model the community structure by a symmetric and non-negative matrix P ∈ RK,K : P (k, `) = baseline probability where a node in Ck and a node in C` have an edge, 1 ≤ k, ` ≤ K. DCMM assumes that for all 1 ≤ i, j ≤ n, Ω(i, j) = θiθjπ′iPπj . If we let θ = (θ1, . . . , θn)′, Π = [π1, . . . , πn] ′, and Θ be the n× n diagonal matrix where Θ(i, i) = θi, 1 ≤ i ≤ n, then we have Ω = ΘΠPΠ′Θ, (1.5) Conventionally, we assume rank(Π) = rank(P ) = K, so DCMM is also a rank-K model. Remark 1. The DCMM model can be viewed as the extension of several models, including the classical block model. In fact, (a) DCMM reduces to Degree-Corrected Block Model (DCBM) [20] if all nodes are pure, (b) DCMM reduces to the Mixed-Membership Stochastic Block Model (MMSBM) [1, 2, 24] if all θi are equal, and (c) DCMM reduces to the classical Stochastic Block Model (SBM) [8] if all nodes are pure and all θi are equal (as above, node i is pure if πi is degenerate). 1.2 When is a rank-K network model also a DCMM model? A DCMM model is a rank-K model, but compared to other rank-K models, all parameter matrices (Θ,Π, P ) in the DCMM model have practical meanings and are easy to interpret. These make the DCMM model especially appealing in practice, and motivate the following problem: When is a rank-K network model also a DCMM model? (1.6) To explain why this is important, we use the dynamic co-citation networks in [11] (see also [10]) as an example. The paper presented 21 co-citation networks for the same set of nodes (i.e., authors) in statistics, each for a different time window. We are interested in (a) how many research areas in statistics, (b) what are baseline citation exchanges between different research areas, and (c) how the research interests of individual authors evolve over time. Here, a co-citation network is a symmetrized citation network where each node is an author, and two nodes have an edge if they have been co-cited for at least N times (for an N they picked) in the corresponding time window. The paper suggested that there are 3 primary research areas in statistics (which was interpreted as “Bayes", “Biostatistics", and “Non-parametric") and a handful of sub-areas, and that it is convenient to model each co-citation network by a DCMM model with K = 3. In detail, for each author i and time window t, 1 ≤ i ≤ n, 1 ≤ t ≤ T , they used a K ×K matrix P (t) to model the baseline citation exchanges between the primary research areas, a positive number θit to model the relative influence (in citations) of author i, and a weight vector πit to model the research interest of author i. If we similarly let Θ(t) = diag(θ1t, θ2t, . . . , θnt) and Π(t) = [π1t, π2t, . . . , πnt]′, then the Bernoulli probability matrix of the DCMM model at time t is Ω(t) = Θ(t)Π(t)P (t)(Π(t))′Θ(t). Using the DCMM model, they discovered a research triangle of statisticians (reminiscent of Efron’s triangle for statistical philosophy [6]), and used it to visualize the trajectories of research interests of a handful of individual authors. Imagine that, if we use a different rank-K model (e.g., GRDPG) to model these networks, say, with Ω(t) = Y (t)J (t)(Y (t))′ for some matrices (Y (t), J (t)). 
It is unclear how to relate Y (t) to baseline citation exchanges, research interests and relative influence of individual authors. This explains why (1.6) is of interest: given a rank-K network model, we wish to know when we can rewrite it as DCMM model, and so we can enjoy the properties and interpretability of the DCMM model. We now come back to (1.6). Seemingly, NMF is to key to answer this question. Consider a positive matrix Ω with rank K and suppose that it has an NMF as in (1.1) for two non-negative matrices Z ∈ Rn,K and P ∈ RK,K : Ω = ZPZ ′. Write Z = [z1, z2, . . . , zn]′ so z′i is the ith row. Without loss of generality, assume all zi are nonzero vectors. Let Θ(i, i) = ‖zi‖1 and πi = zi/‖zi‖1, 1 ≤ i ≤ n. It is seen that Θ(i, i) > 0, that each πi is a weight vector, and that Ω = ZPZ ′ = ΘΠPΠ′Θ. Therefore, we can always rewrite a rank-K model as a DCMM model if Ω has an NMF as in (1.1). This explains our motivation underline the NMF problem (1.1). Note that to answer the question in (1.1), a study on the NMF problem in (1.2) would be not be relevant. For example, in a DCMM model, K is the number of communities, so an NMF in (1.2) with an N > K would not be useful. For this reason, we have to focus on the NMF problem in (1.1). 1.3 Results and contributions Write Ω = Y JK,mY ′ as in Example 2, where JK,m = diag(1, . . . , 1,−1, . . . ,−1) is a K × K diagonal matrix and Y = [y1, y2, . . . , yn]′ ∈ Rn,K . Let λk be the k-th eigenvalue of Ω and let ξk be the corresponding eigenvector. For 1 ≤ i ≤ n, define ri ∈ RK−1 by ri(k) = ξk+1(i)/ξ1(i), 1 ≤ k ≤ K − 1. For any unit-norm vector y0 ∈ RK , let c(y0) = max{1≤i≤n}{|(yi, y0)|/‖yi‖}. In Section 2, we show that the NMF problem for Ω is solvable if m ≤ K/2 and c(y0) ≥ √ 1− 1/K for some y0; let us call this the main condition. We show that, in order for the NMF problem to be solvable, the constant √ 1− 1/K can not be further reduced. Therefore, in this sense, our results are sharp. Using this, we deduce several other results. Especially, we show that the NMF problem is solvable for Ω if ∑K−1 k=1 (|λk+1| · r2i (k)) ≤ |λ1|/(K − 1) for all 1 ≤ i ≤ n. We also extend our results to the case of m > K/2, and explain why we need a different proof in this case. In Section 3, we apply our results on NMF to network modeling. We argue that for parameters in the most interesting range, we have (A) all ‖ri‖ are bounded, and (B) max2≤k≤K{|λk/λ1|} → 0, and so the condition just mentioned holds. This implies that we can alway rewrite a rank-K network model as a DCMM model if the parameters are in the most interesting range. We also discuss how to check the main condition in practice where Ω is unknown. We tackle this by proposing an approach to estimating Ω, and support our results by some real networks. Our contributions are two fold. First, we develop several new results on symmetric NMF (a problem of interest in many applications [5]). Existing works on symmetric NMF have been focused on the case of m = 0 (so Ω is positive semi-definite; m is the number of negative eigenvalues of Ω). In this case, the best result is seen to be [26, Theorem 3.137], which can be viewed as a special case of our results; see Remark 2. This suggests that our results are sharp, for they are hard to improve even in the special case of m = 0. Note that our case allows m to take any possible values, so it is clearly harder to study. 
For example, to show the results for the case of m = 0, it suffices if we can find a K ×K orthogonal matrix Q such that Y Q′ is non-negative, since JK,m is the identity matrix in this case. For our case, we must find a Q such that Y Q′ and QJK,mQ′ are simultaneously non-negative. Clearly, this requires new ideas. We tackle this by constructing a special class of matrices Q; see our proofs for details. Our approach is quite different from that of [26, Theorem 3.137] and is new. Second, we shed interesting new light on different rank-K network models. In the literature, it is not unusual that many similar models are proposed for the same type of data sets. But in the end, we need to understand the advantages and disadvantages of different models, and pick the most suitable one. Our study recommends DCMM model, for it offers desired practical interpretability which other rank-K models do not have, and points out that a general rank-K model is also a DCMM model if the parameters are in the most interesting range. Such findings are valuable for they can help us identify the most suitable models in real applications. Notations. We denote e1, e2, . . . , eK by the standard basis vectors of K-dimensional Euclidean space and e0 = K−1/2(e1 + e2 + . . .+ eK). For any q > 0 and vector x, ‖x‖q denotes the `q-norm (when q = 2, we drop the subscript and write ‖x‖). For any two vectors x and y of the same dimension, (x, y) denotes the inner product. For a vector a ∈ Rn, diag(a) denotes the n×n diagonal matrix where the i-th diagonal entry is ai, 1 ≤ i ≤ n. When Ω is an n× n matrix, diag(Ω) denotes the n× n diagonal matrix where the i-th entry is Ω(i, i), 1 ≤ i ≤ n. 2 Main results on NMF This section presents our results on NMF. Results on network modeling are in Section 3. Consider an n× n irreducible non-negative matrix Ω with rank K, where n is usually much larger than K. By Perron’s theorem [9], at least one eigenvalue of Ω is positive. Fix 0 ≤ m ≤ K − 1 and suppose Ω has m negative eigenvalues. Let JK,m = diag(1, . . . , 1,−1, . . . ,−1) be the K ×K diagonal matrix as in Example 2. By basic algebra, we can always write Ω = Y JK,mY ′, for a full rank matrix Y ∈ Rn,K . (2.7) We can also show (e.g., an exercise with the Weyl’s theorem [9]) that for any matrix as in (2.7), the numbers of positive and negative eigenvalues are (K −m) and m, respectively. Write Y = [y1, y2, . . . , yn] ′, so that y′i is row i of Y , 1 ≤ i ≤ n. (2.8) Define the subset of K-dimensional vectors that live on the unit-sphere where the last m entries are 0: SK,m = {x = (x1, . . . , xK)′ ∈ RK , ‖x‖ = 1, xK−m+1 = . . . = xK = 0}. When m = 0, Sm is the unit sphere of RK . The following theorem is proved in the supplement. Theorem 2.1 Fix K ≥ 2, n ≥ K, and 0 ≤ m ≤ K/2. Consider the NMF problem (1.1) where Ω = Y JK,mY ′ and Y are as in (2.7). Suppose there is a vector y0 ∈ SK,m such that |(y0, yi)|/‖yi‖ ≥ √ 1− 1/K, for all 1 ≤ i ≤ n. (2.9) There exists a K ×K orthogonal matrix Q such that both Y Q′ and QJK,mQ′ are non-negative. As a result, the NMF problem for Ω is solvable: Ω = ZPZ ′ with Z = Y Q′ and P = QJK,mQ′. We have several comments. First, Theorem 2.1 assumes two conditions: m ≤ K/2 and (2.9). When K ≤ 2, both conditions hold automatically, so the NMF problem is always solvable in this case; see Section 2.1. As far as we know, our proof is different from existing approaches. Second, in Theorem 2.1, we require y0 ∈ Sm. This may seem restrictive, but is not. This is because y0 is a vector we choose for our own convenience. 
In fact, one of the most interesting settings for NMF seems to be that in Section 2.3, where we choose y0 = (1, 0, . . . , 0)′, so the requirement is satisfied automatically. Also, when the last m entries of y0 are nonzero but sufficiently small, Theorem 2.1 continues to hold if we modify the term √ 1− 1/K slightly. Third, from a practical view point, the condition of m ≤ K/2 is mild: we rarely see a rank-K network model with m > K/2 (note here m can be estimated using the eigenvalues of the adjacency matrix A). For theoretical completeness, the case of m > K/2 is also interesting, but there does not exist an orthogonal matrix Q such that QJK,mQ′ is non-negative. This is because for any such Q, trace(QJK,mQ′) = K − 2m < 0. Therefore, we must find a different way to solve the NMF problem in this case. We discuss this in Section 2.4. Last, an interesting question is whether our idea is extendable to asymmetric NMF or complex NMF [19]. As a simple extension to asymmetric NMF, consider an n× p positive matrix Ω of rank-K. By SVD, Ω = Y Z ′ for an n×K matrix Y and a p×K matrix Z. Let y′i be i-th row of Y and z′j be the j-th row of Z, respectively. If there is a y0 ∈ SK,m such that for all i and j, |(yi, y0)|/‖yi‖ ≥ √ 1− 1/K and |(zj , y0)|/‖zj‖ ≥ √ 1− 1/K, then we can find a K ×K orthogonal matrix Q which rotates all rows of Y and Z to the first orthant simultaneously. In this case, the asymmetric NMF problem is solvable for Ω. For reasons of space, we leave further study along this line to the future. Our result is sharp for the constant √ 1− 1/K in (2.9) can not be further reduced. While we can show this for general K, we illustrate with the case of K = 2 for instruction purpose. In this case, we can rotate n unit-norm vectors y1, y2, . . . yn in R2 simultaneously to the first orthant if and only if there is a unit-norm vector y0 such that |(y0, yi)| ≥ √ 1− 1/2 (i.e., the angle between them is ≤ π/4) for all 1 ≤ i ≤ n. See Section 2.1 and Remark 3 for more discussion. Another way to see the sharpness is to consider the case of m = 0 (so Ω is positive semi-definite). In this case, condition (2.9) is hard to improve and is the weakest we have so far in the literature; see Remark 2. 2.1 The case of K = 2 In this case, the NMF problem is always solvable, as the two conditions of Theorem 2.1, m ≤ K/2 and (2.9), hold automatically. In fact, first, since Ω has at least one positive eigenvalues and K = 2, we have either m = 0 or m = 1, and so m ≤ K/2. Second, we can always find a y0 ∈ Sm such that (2.9) is satisfied. In detail, let 0 ≤ θi < 2π be the angle from e1 (e1 = (1, 0)) to yi counterclockwise, and let θmin and θmax be the smallest and largest values of all θi. Now, when m = 0, let y0 be the unit vector where the angle from e1 to y0 is (θmax + θmin)/2, counterclockwise. When m = 1, take y0 = (1, 0). The following theorem is proved in the supplement. Theorem 2.2 Fix K = 2, 0 ≤ m ≤ K − 1, n ≥ K, and let y0 be as above. In this case, m ≤ K/2 and (2.9) holds for the y0 above, so the NMF problem is always solvable for Ω. 2.2 When y0 is a scaled weighted average of yi’s For the y0 in (2.9), an interesting choice is to let it be proportional to a weighted average of yi’s. Call w ∈ Rn a weight vector if all of its entries are non-negative with a sum of 1. Recall that Ω = Y JK,mY ′. Define a proxy of Ω by Ω̃ = Y Y ′. Note that Ω̃ = Ω if m = 0. Introduce y(w) ∈ RK and β(w) ∈ Rn by y(w) = ∑n i=1 wiyi = Y ′w and β(w) = Ω̃w. Since Y is full rank, y(w) 6= 0. Take y0 = y(w)/‖y(w)‖. 
Condition (2.9) reduces to |β(w)i |/ √ Ω̃(i, i)(w′Ω̃w) ≥ √ 1− 1/K, for all 1 ≤ i ≤ n. (2.10) Theorem 2.3 Fix K ≥ 3, 0 ≤ m ≤ K/2, and n ≥ K. The NMF problem (1.1) is solvable for Ω if the last m entries of y(w) are 0 and (2.10) holds. Theorem 2.3 follows from Theorem 2.1 by direct calculations, so the proof is omitted. We require that the last m entries of y(w) are 0, for we need y0 ∈ Sm in Theorem 2.1. As explained before, this may seem restrictive, but it is not, as in the most interesting case to be discussed in Section 2.3, we take y(w) = (1, 0, . . . , 0), so the requirement is satisfied automatically. See details therein. When m = 0, Ω̃ = Ω, and β(w) = Ωw. In this case, condition (2.10) reduces to |β(w)i |/ √ Ω(i, i)(w′Ωw) ≥ √ 1− 1/K. (2.11) We have the following corollary, the proof of which is straightforwards so is omitted. Corollary 2.1 Fix n ≥ K ≥ 3. The NMF problem (1.1) is solvable for Ω if m = 0 and (2.11) holds. Remark 2. If we take w = n−11n, then (2.10) reduces to |βi|/ √ Ω(i, i)(1′nΩ1n) ≥ √ 1− 1/K with β = Ω1n, and Corollary 2.1 reduces to [26, Theorem 3.137], where m = 0 and Ω is positive semi-definite. Our setting is more general as Ω may have m negative eigenvalues for any m ≤ K/2. For the case of m = 0, [26, Theorem 3.137] (see also [27]) is by far the best results we can have. The book [26] presents several other results on this topic, but they need some conditions which are less intuitive or harder to check. Recall that the constant √ 1− 1/K in (2.9) can not be further reduced. These suggest that Theorem 2.1 is hard to improve and our results are sharp. Remark 3. (When can we rotate n vectors to the first orthant?) As a stylized application, consider the following problem. Let x1, x2, . . . , xn be n unit-norm vectors in RK , n ≥ K, and let αK(x1, x2, . . . , xn) = min1≤i,j≤n{(xi, xj)}. For what values of αK(x1, x2, . . . , xn) can we rotate all n points simultaneously to the first orthant? Let X = [x1, x2, . . . , xn]′ and assume X is full rank without loss of generality. The matrix Ω = XX ′ is symmetric and positive semi-definite. Let α∗K = 0 if K = 2 and α ∗ K = √ 1− 1/K if K ≥ 3. Applying Theorem 2.1 with m = 0, it follows that when αK(x1, x2, . . . , xn) ≥ α∗K , we can rotate all n points to the first orthant. Note that we can not do so if αK(x1, x2, . . . , xn) < 0. 2.3 When Y is constructed by the spectral decomposition of Ω So far, we have tried to keep our results as general as we can, and Y can be any matrix satisfying Ω = Y JK,mY ′. An interesting special case is when Y is constructed using the spectral decomposition of Ω, which we now discuss. For 1 ≤ k ≤ K, let λk be the k-th largest eigenvalue of Ω, and let ξk be the corresponding (unit-norm) eigenvector. In the literature λ1 and ξ1 are called the Perron root and Perron vector, respectively, where we can always assume all entries of ξ1 are positive since Ω is irreducible and non-negative (e.g., [26]). Write Ξ = [ξ1, ξ2, . . . , ξK ] and define the n× (K − 1) so-called matrix of entry-wise ratio R by R(i, k) = ξk+1(i)/ξ1(k), 1 ≤ k ≤ K − 1, 1 ≤ i ≤ n [12, 16]. Introduce D = diag(|λ1|, |λ2|, . . . , |λK |), D0 = diag(|λ2|, . . . , |λK |), (2.12) and write R = [r1, r2, . . . , rn] ′, Y = ΞD1/2 = [y1, y2, . . . , yn] ′. (2.13) By spectral decomposition, Ω = ΞD1/2JK,mD1/2Ξ′ = Y JK,mY ′. Now, in Section 2.2, if we take w = cξ1 where c = 1/‖ξ1‖1, then by basic algebra and definitions, it is seen y(w) = c √ λ1e1 and so y0 = e1 and especially y0 ∈ Sm. 
2.3 When Y is constructed by the spectral decomposition of Ω

So far, we have tried to keep our results as general as we can, and $Y$ can be any matrix satisfying $\Omega = YJ_{K,m}Y'$. An interesting special case is when $Y$ is constructed using the spectral decomposition of $\Omega$, which we now discuss. For $1 \le k \le K$, let $\lambda_k$ be the $k$-th largest eigenvalue of $\Omega$, and let $\xi_k$ be the corresponding (unit-norm) eigenvector. In the literature, $\lambda_1$ and $\xi_1$ are called the Perron root and Perron vector, respectively, where we can always assume all entries of $\xi_1$ are positive since $\Omega$ is irreducible and non-negative (e.g., [26]). Write $\Xi = [\xi_1, \xi_2, \ldots, \xi_K]$ and define the $n \times (K-1)$ so-called matrix of entry-wise ratios $R$ by $R(i,k) = \xi_{k+1}(i)/\xi_1(i)$, $1 \le k \le K-1$, $1 \le i \le n$ [12, 16]. Introduce
$$D = \mathrm{diag}(|\lambda_1|, |\lambda_2|, \ldots, |\lambda_K|), \quad D_0 = \mathrm{diag}(|\lambda_2|, \ldots, |\lambda_K|), \quad (2.12)$$
and write
$$R = [r_1, r_2, \ldots, r_n]', \quad Y = \Xi D^{1/2} = [y_1, y_2, \ldots, y_n]'. \quad (2.13)$$
By spectral decomposition, $\Omega = \Xi D^{1/2} J_{K,m} D^{1/2} \Xi' = YJ_{K,m}Y'$. Now, in Section 2.2, if we take $w = c\xi_1$ where $c = 1/\|\xi_1\|_1$, then by basic algebra and the definitions, it is seen that $y^{(w)} = c\sqrt{\lambda_1}\,e_1$, so $y_0 = e_1$ and in particular $y_0 \in \mathcal{S}_{K,m}$. Moreover, $\beta^{(w)}_i = c\lambda_1\xi_1(i)$, $w'\tilde{\Omega}w = c^2\lambda_1$, and $\tilde{\Omega}(i,i) = y_i'Dy_i$. Combining these, condition (2.10) reduces to
$$r_i'D_0r_i \equiv \sum_{k=1}^{K-1}\big(|\lambda_{k+1}| \cdot r_i^2(k)\big) \le |\lambda_1|/(K-1), \quad \text{for all } 1 \le i \le n. \quad (2.14)$$
The following theorem is proved in the supplement.

Theorem 2.4 Fix $K \ge 3$, $m \le K/2$, and $n \ge K$. The NMF problem (1.1) is solvable if (2.14) holds.

Note that, as in most works on NMF (e.g., [26]), the main goal is to find easy-to-check conditions under which the NMF is solvable. Such conditions are sufficient but not necessary.

2.4 The case of m > K/2

So far, we have focused on the case of $m \le K/2$, which is the case most frequently found in real networks. For completeness, we now consider the case where $m > K/2$. Since $0 \le m \le K-1$, such a case only exists when $K \ge 3$. In Theorem 2.1, we show that when $m \le K/2$, we can find an orthogonal matrix $Q$ such that $QJ_{K,m}Q'$ is non-negative. When $m > K/2$, we cannot do this, as for any such $Q$, $\mathrm{trace}(QJ_{K,m}Q') = (K-2m) < 0$. Therefore, we need a new approach. A convenient approach is to redefine $J_{K,m}$: we select a subset of the positive diagonal entries of $J_{K,m}$ and add a positive number to each of them. Success has been shown in a related setting (e.g., [3]). Using such a trick, we can extend all our main results to the case of $m > K/2$. For reasons of space, we only consider an extension of Theorem 2.4, as the claim of that theorem is probably the most explicit. Also for reasons of space, we only consider the case where we add a number to the first diagonal entry of $J_{K,m}$. Note that the idea is readily extendable to more general cases. Let $\mathcal{Q}$ be the set of all orthogonal matrices whose first column is $K^{-1/2}(1, 1, \ldots, 1)'$. Fix $1 \le m \le K-1$. For any $Q \in \mathcal{Q}$, write $Q = [Q^{(K-m)}, Q^{(m)}]$, where $Q^{(K-m)}$ and $Q^{(m)}$ are the sub-matrices of $Q$ consisting of the first $(K-m)$ columns and the remaining $m$ columns, respectively. Introduce the constant
$$a_m = 1 + K\inf_{Q \in \mathcal{Q}}\max_{1 \le i,j \le K}\{H(i,j) : H = 2Q^{(m)}(Q^{(m)})' - I_K\},$$
where $I_K$ is the $K \times K$ identity matrix. Theorem 2.5 extends Theorem 2.4 and is proved in the supplement.

Theorem 2.5 Fix $K \ge 3$, $0 \le m \le K-1$, and $n \ge K$. We have $a_m = 1$ if $m \le K/2$ and $a_m = K-1$ if $m = K-1$. Also, the NMF problem is solvable for $\Omega$ if $r_i'D_0r_i \equiv \sum_{k=1}^{K-1}|\lambda_{k+1}|r_i^2(k) \le 1/[a_m(K-1)]$ for all $1 \le i \le n$.

When $m \le K/2$, $a_m = 1$. In this case, the claim here is the same as that in Theorem 2.4.

Remark 4. When the NMF problem for $\Omega$ is solvable, the solution is usually not unique without a proper regularity condition (e.g., [5]). In our setting, once we can write $\Omega = \Theta\Pi P\Pi'\Theta$ for some non-negative matrices $(\Theta, \Pi, P)$ as in (1.5), the factorization is unique if (a) for each $1 \le k \le K$, there is at least one $i$ such that $\pi_i = e_k$, where $e_k$ is the $k$-th standard Euclidean basis vector of $\mathbb{R}^K$, and (b) all diagonal entries of $P$ are 1 (see [15, 16] for a proof).

Remark 5. When condition (2.9) of Theorem 2.1 holds for some vector $y_0$, how do we find such a $y_0$ and the orthogonal matrix $Q$ in Theorem 2.1 numerically? This is an interesting question, and we discuss it in Section F of the supplement.
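As a concrete illustration of condition (2.14), here is a minimal NumPy sketch (ours, not from the paper) that checks it for a given $\Omega$; the toy matrix is an assumed example.

```python
import numpy as np

def condition_2_14_holds(Omega, K):
    """Check r_i' D0 r_i <= |lambda_1|/(K - 1) for all i, i.e., condition (2.14)."""
    vals, vecs = np.linalg.eigh(Omega)
    order = np.argsort(vals)[::-1][:K]          # eigenvalues in decreasing order
    lam, Xi = vals[order], vecs[:, order]
    if np.all(Xi[:, 0] <= 0):                   # make the Perron vector positive
        Xi[:, 0] = -Xi[:, 0]
    R = Xi[:, 1:] / Xi[:, [0]]                  # entry-wise ratios r_i(k)
    lhs = (np.abs(lam[1:]) * R ** 2).sum(axis=1)
    return bool(np.all(lhs <= np.abs(lam[0]) / (K - 1)))

# Illustrative non-negative matrix whose spectrum is strongly dominated by lambda_1.
Omega = np.ones((3, 3)) + 0.2 * np.eye(3)
print(condition_2_14_holds(Omega, K=3))         # True
```

For this toy $\Omega$, the eigenvalues are $(3.2, 0.2, 0.2)$ and the left-hand side of (2.14) equals $0.4$ for every $i$, well below $|\lambda_1|/(K-1) = 1.6$, so the check passes.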
3 When is a rank-K network model also a DCMM model

So far, we have focused on general NMF settings, where we showed that the NMF problem (1.1) is solvable when, for example, (2.14) holds. We now apply the results to networks and study when we can rewrite a rank-$K$ network model as a DCMM model.

Network analysis (e.g., community detection, membership estimation, link prediction) is a well-studied area, where we have a lot of knowledge on what the regime of major interest is and what conditions are reasonable [16, 15, 18, 29]. In fact, in network analysis, we usually use an asymptotic framework where $n \to \infty$, $K$ is fixed, and other parameters may vary with $n$, and it is quite acceptable to assume (A) all $\|r_i\|$ are bounded and (B) $\max_{2 \le k \le K}\{|\lambda_k/\lambda_1|\} \to 0$; the notations are the same as those in Theorem 2.4. In fact, (A)-(B) model the most interesting regime in network analysis. In Theorem 2.4, the main condition (i.e., (2.14)) is $r_i'D_0r_i \le |\lambda_1|/(K-1)$ for all $1 \le i \le n$. Once (A)-(B) hold, $(1/|\lambda_1|)D_0 \to 0$ and (2.14) holds, so we can always rewrite a rank-$K$ network model as a DCMM model when (A)-(B) hold.

The remaining question is then why (A)-(B) are reasonable assumptions in network analysis, and why they model the most interesting regime. We now explain these in detail. Let $\Omega$ be the Bernoulli probability matrix as in (1.3). Suppose $\Omega = YPY'$, where $Y \in \mathbb{R}^{n,K}$ is full rank, $P \in \mathbb{R}^{K,K}$, and $(Y, P)$ are not necessarily non-negative. Denote $G = Y'Y$. Note that $G$ is a $K \times K$ symmetric and positive definite matrix. Let $G^{1/2}$ be the (unique) square root of $Y'Y$. We usually assume $Y$ is balanced in that (a) the $\ell_2$-norms of all $K$ columns are of the same order, and (b) there is no severe collinearity among the $K$ columns [15, 18]. As a result, all eigenvalues of $G$ are of the same order. By basic algebra, there is a $K \times K$ orthogonal matrix $Q$ such that $\Xi = [\xi_1, \xi_2, \ldots, \xi_K] = YB$, where $B = G^{-1/2}Q$. Write $B = [b_1, b_2, \ldots, b_K]$ and let $0 \le \alpha_i < 2\pi$ be the angle between $b_1$ and $y_i$ (counterclockwise). Let $M(\Omega) = \max_{1 \le i \le n}\{1/|\cos(\alpha_i)|\}$ and define the matrix $V \in \mathbb{R}^{K,K-1}$ by
$$V(i,k) = b_{k+1}(i)/b_1(i), \quad 1 \le i \le K,\ 1 \le k \le K-1. \quad (3.15)$$
Write $V = [v_1, v_2, \ldots, v_K]'$, so $v_k'$ is row $k$ of $V$, $1 \le k \le K$. For any symmetric matrix $P$, $\lambda_k(P)$ denotes the $k$-th largest eigenvalue; to be consistent with earlier notations, we simply write $\lambda_k(\Omega)$ as $\lambda_k$. Lemma 3.1 is proved in the supplement.

Lemma 3.1 We have $B = \mathrm{diag}(b_1)[1_K, V]$, $P = B\,\mathrm{diag}(\lambda_1, \ldots, \lambda_K)B'$, $b_1$ is an eigenvector of $PG$, and $P(k,k) = b_1^2(k)[\lambda_1 + v_k'\,\mathrm{diag}(\lambda_2, \ldots, \lambda_K)v_k]$, $1 \le k \le K$. Moreover, if as $n \to \infty$, $\lambda_1(G) \le c_0\lambda_K(G)$ for a constant $c_0 > 0$, then condition (B) holds if and only if $\max_{2 \le k \le K}\{|\lambda_k(P)/\lambda_1(P)|\} \to 0$, and $\max_{1 \le i \le n}\{\|r_i\|\} \le CM(\Omega)$.

It is seen that conditions (A)-(B) hold if $M(\Omega) \le C$ and $\max_{2 \le k \le K}\{|\lambda_k(P)/\lambda_1(P)|\} \to 0$. The first condition is mild: it only requires that no $y_i$ is nearly orthogonal to $b_1$. To boil these conditions down to a more explicit and vivid form, we consider the DCMM model. It is fine to consider the DCMM model here, for (a) we only use the model to explain why conditions (A)-(B) are reasonable, and (b) the argument below is extendable beyond the DCMM model. In the DCMM model, $\Omega = \Theta\Pi P\Pi'\Theta$. Therefore, we can write $\Omega = YPY'$ if we let $Y = \Theta\Pi$, where we note that $(Y, P)$ are non-negative. Recall that $G = Y'Y$ (a positive definite $K \times K$ matrix). Lemma 3.2 is proved in the supplement.

Lemma 3.2 If $(Y, P)$ are non-negative, then first, $PG$ is an irreducible non-negative matrix and $b_1$ is the Perron vector, so all entries of $b_1$ are strictly positive. Second, all $r_i$ live within a simplex with $v_1, v_2, \ldots, v_K$ as its vertices, so $\max_{1 \le i \le n}\{\|r_i\|\} \le \max_{1 \le k \le K}\{\|v_k\|\}$. Last, if $\lambda_1(G) \le c_0\lambda_K(G)$, then $\max_{1 \le i \le n}\{\|r_i\|\} \le CM(\Omega) \le C\max_{1 \le k \le K}\{\|b_1\|/b_1(k)\}$.
Now, first, in a DCMM model, the entry $P(k,\ell)$ measures the baseline probability that there is an edge between a node in community $k$ and a node in community $\ell$. Therefore, the most difficult, and most interesting, case is where all $P(k,\ell)$ have similar values. In this case, $P$ is close to rank 1; in other words, $\max_{2 \le k \le K}\{|\lambda_k(P)/\lambda_1(P)|\} \to 0$, and so $\max_{2 \le k \le K}\{|\lambda_k/\lambda_1|\} \to 0$. See for example [15, 18], where it was further pointed out that the most difficult case for network analysis is when $\max_{2 \le k \le K}\{|\lambda_k|\} \le L_n \cdot \sqrt{\lambda_1}$ for a multi-$\log(n)$ factor $L_n$. Therefore, condition (B) models the most difficult case of network analysis and so is of major interest. Moreover, by Lemma 3.2, $\max_{1 \le i \le n}\{\|r_i\|\} \le C$ if all entries of $b_1$ are of the same order. This is only a mild condition, for $b_1$ is the Perron vector of $PG$. Last, by Lemma 3.2, we also have $\max_{1 \le i \le n}\{\|r_i\|\} \le C$ if we alternatively assume $\max_{1 \le k \le K}\{\|v_k\|\} \le C$. Recall that $B = G^{-1/2}Q = [b_1, b_2, \ldots, b_K]$ and $v_1', v_2', \ldots, v_K'$ are the rows of $V$, formed by dividing $b_2, b_3, \ldots, b_K$ by $b_1$ entry-wise, where $b_1$ is the Perron vector. Since $G$ is positive definite with all eigenvalues of the same order, $Q$ is orthogonal, and $V$ is properly scaled (and all of them have small sizes), it is only a mild condition to assume $\max_{1 \le k \le K}\{\|v_k\|\} \le C$. These explain why conditions (A)-(B) are mild and why they model the most challenging regime for network analysis.

4 Real data examples, and especially how to check condition (2.14)

Let $a_i = (1/|\lambda_1|)r_i'D_0r_i$, $1 \le i \le n$. Condition (2.14) can be rewritten as $a_i \le 1/(K-1)$ for all $1 \le i \le n$. In applications, $\Omega$ is unknown, so it is unclear how to obtain $a_i$. A straightforward approach is to estimate $a_i$ with the eigenvalues and eigenvectors of the adjacency matrix $A$, but the estimates may be too noisy. We propose the following approach, which is inspired by Lemmas 3.1-3.2 and the recent Mixed-SCORE approach [16]. Let $(Y, V)$ be as above. Mixed-SCORE suggests an interesting idea for estimating $V$ and (a normalized version of) $Y$, denoted by $\Pi$; see details therein. Let $\hat\lambda_k$ be the $k$-th eigenvalue of $A$ and let $\hat\xi_k$ be the corresponding eigenvector. Write $\hat\Xi = [\hat\xi_1, \hat\xi_2, \ldots, \hat\xi_K] = [\hat z_1, \hat z_2, \ldots, \hat z_n]'$, so $\hat z_i'$ is row $i$ of $\hat\Xi$. Our approach runs as follows (a simplified numerical version is sketched after Remark 6 below).

• Apply Mixed-SCORE and obtain an estimate $(\hat V, \hat\Pi)$ for $(V, \Pi)$. Let $\hat v_k'$ be row $k$ of $\hat V$ and let $\hat\pi_i'$ be row $i$ of $\hat\Pi$, $1 \le k \le K$, $1 \le i \le n$.

• Estimate $b_1$ by $\hat b_1$, where $\hat b_1(k) = [\hat\lambda_1 + \hat v_k'\,\mathrm{diag}(\hat\lambda_2, \ldots, \hat\lambda_K)\hat v_k]^{-1/2}$. Let $\hat B = \mathrm{diag}(\hat b_1)[1_K, \hat V]$, and estimate $P$ by $\hat P = \hat B\,\mathrm{diag}(\hat\lambda_1, \hat\lambda_2, \ldots, \hat\lambda_K)\hat B'$. Let $\hat y_i = (\|\hat z_i\|_1/\|\hat B'\hat\pi_i\|_1)\hat\pi_i$, $1 \le i \le n$, and let $\hat Y = [\hat y_1, \hat y_2, \ldots, \hat y_n]'$.

• Let $\hat\mu_k$ be the $k$-th eigenvalue of the matrix $\hat\Omega = \hat Y\hat P\hat Y'$, and let $\hat\eta_k$ be the corresponding eigenvector. In the definition of $a_i$ (see above and (2.14)), replace $(\lambda_k, \xi_k)$ by $(\hat\mu_k, \hat\eta_k)$ and denote the resulting quantity by $\hat a_i$, $1 \le i \le n$. These are our estimates of $a_i$.

The approach can be shown to be consistent for $\Omega$ under some regularity conditions. We skip the study, for it is beyond the scope of this paper. In this algorithm, $(\hat Y, \hat P)$ are not automatically non-negative, and to check whether NMF is solvable for $\hat\Omega$, we can check whether
$$\hat a_i \le 1/(K-1), \quad \text{for all } 1 \le i \le n. \quad (4.16)$$

Remark 6. Condition (2.14) of Theorem 2.4 is only a sufficient condition for NMF; it is not a necessary condition. It could happen that an NMF is solvable for an $\Omega$ while (2.14) does not hold.
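Below is a minimal sketch (ours, not from the paper) of only the "straightforward approach" mentioned at the start of this section: plug the spectrum of $A$ directly into the definition of $a_i$. As noted above, this can be noisy; the paper's preferred procedure refines it via Mixed-SCORE, which is not reproduced here. The block-model example is an assumed input.

```python
import numpy as np

def a_hat_plugin(A, K):
    """Naive plug-in estimates of a_i from the spectrum of A (can be noisy)."""
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1][:K]        # K largest eigenvalues of A
    lam, Xi = vals[order], vecs[:, order]
    R = Xi[:, 1:] / Xi[:, [0]]                # entry-wise ratios
    return (np.abs(lam[1:]) * R ** 2).sum(axis=1) / np.abs(lam[0])

# Illustrative two-community network drawn from a block model (assumed example).
rng = np.random.default_rng(2)
n, K = 200, 2
labels = np.repeat([0, 1], n // 2)
prob = np.where(labels[:, None] == labels[None, :], 0.5, 0.1)
U = np.triu(rng.uniform(size=(n, n)) < prob, 1).astype(float)
A = U + U.T                                   # symmetric adjacency, no self-edges
a_hat = a_hat_plugin(A, K)
print(a_hat.max(), a_hat.max() <= 1 / (K - 1))   # check (4.16)
```

Under the assumed block-model parameters, $\hat\lambda_2/\hat\lambda_1$ is well below 1, so the check typically passes; on real data the Mixed-SCORE-based estimates above would be preferable.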
We now consider some real examples. The weblog data is a well-known data set [22], where, with some light preprocessing, the network has 1,222 nodes (each is a blog) and 16,714 edges (each is a two-way hyperlink). The network has two communities: Democratic and Republican. For this data set, a rank-2 model is appropriate, so we have $(n, K) = (1222, 2)$ (e.g., [30, 12, 18]). Let $\Omega$ be the Bernoulli probability matrix as in (1.3). By Theorem 2.2, when $K = 2$, we can always decompose $\Omega$ as $\Omega = YPY'$ for a non-negative $n \times 2$ matrix $Y$ and a $2 \times 2$ non-negative matrix $P$. Now, by the paragraph right above Remark 1, we can rewrite $\Omega = \Theta\Pi P\Pi'\Theta$ as in (1.5), so $\Omega$ satisfies a DCMM model. The same claim can be drawn for the karate data set [30, 12], where we similarly have $K = 2$.

As another example, we consider the UKFaculty network (e.g., see [17, Table 1]). It is reasonable to model the network with a rank-$K$ model with $(n, K) = (81, 3)$ and $m \le K/2$. By Theorem 2.4, the model can be rewritten as a DCMM model if (4.16) holds. Following the discussion above, we first obtain an estimate $\hat\Omega$ of $\Omega$. We then use $\hat\Omega$ to obtain $\hat a_i$ and check whether (4.16) holds. The results are in Figure 1 (left) below, where the maximum of $\hat a_1, \hat a_2, \ldots, \hat a_n$ is slightly smaller than 0.5 ($1/(K-1) = 0.5$ as $K = 3$), suggesting that (4.16) holds. Moreover, let $\hat\mu_k$ be the $k$-th eigenvalue of $\hat\Omega$ and let $\hat\eta_k$ be the corresponding eigenvector. Let $\hat D = \mathrm{diag}(\hat\mu_1, \ldots, \hat\mu_K)$ and $\hat Y = [\hat\eta_1, \ldots, \hat\eta_K]\hat D^{1/2}$. We have $\hat\Omega = \hat YJ_{K,m}\hat Y'$. Let $Q$ be the $3 \times 3$ matrix whose three rows are $(1/\sqrt{3}, 1/\sqrt{6}, 1/\sqrt{2})$, $(1/\sqrt{3}, 1/\sqrt{6}, -1/\sqrt{2})$, and $(1/\sqrt{3}, -2/\sqrt{6}, 0)$, respectively. Define $\hat Z = \hat YQ'$. It is seen that $\hat\Omega = \hat YJ_{K,m}\hat Y' = \hat Z[QJ_{K,m}Q']\hat Z'$, where $QJ_{K,m}Q'$ is seen to be non-negative (see the sketch at the end of this section for a numerical verification). Moreover, for $1 \le i \le n$, let $\hat z_i$ be the smallest entry in row $i$ of $\hat Z$. Figure 1 (right) plots the histogram of $\{\hat z_i\}_{i=1}^n$. The results suggest that all $\hat z_i$ are non-negative, so the matrix $\hat YQ'$ is non-negative. Therefore, $\hat\Omega$ has an NMF given by $\hat\Omega = \hat Z[QJ_{K,m}Q']\hat Z'$. These suggest that for the UKFaculty data set, (4.16) holds and it is reasonable to model UKFaculty with a DCMM model.

In summary, in many recent works on network analysis, we frequently assume that a DCMM model holds for the setting at hand, but we rarely check whether such an assumption is valid. Our NMF results provide an approach to checking whether a network satisfies a DCMM model.
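To back up the UKFaculty claim above, here is a minimal sketch (ours, not from the paper) verifying that the explicit $Q$ is orthogonal and that $QJ_{K,m}Q'$ is non-negative; taking $m = 1$ (the nontrivial case allowed by $m \le K/2$ with $K = 3$) is our assumption.

```python
import numpy as np

# The explicit 3x3 matrix Q from the UKFaculty example above.
Q = np.array([[1/np.sqrt(3),  1/np.sqrt(6),  1/np.sqrt(2)],
              [1/np.sqrt(3),  1/np.sqrt(6), -1/np.sqrt(2)],
              [1/np.sqrt(3), -2/np.sqrt(6),  0.0]])
K, m = 3, 1                                   # m = 1: nontrivial case with m <= K/2
J = np.diag([1.0] * (K - m) + [-1.0] * m)     # J_{K,m} = diag(1, 1, -1)

print(np.allclose(Q @ Q.T, np.eye(K)))        # Q is orthogonal: True
M = Q @ J @ Q.T
print(np.round(M, 12))                        # the permutation matrix [[0,1,0],[1,0,0],[0,0,1]]
print(bool((M >= -1e-12).all()))              # Q J_{K,m} Q' is non-negative: True
```

Here $QJ_{3,1}Q'$ comes out as a permutation matrix, so non-negativity of $\hat Z = \hat YQ'$ is the only remaining condition, exactly what Figure 1 (right) checks.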
5 Discussion

We derive a sharp NMF result and apply it to network modeling. Both NMF and network analysis are important areas in machine learning, with applications in image processing, social media, NLP, and cancer studies [5, 23, 21]. In comparison, NMF is more theoretically oriented and network analysis is more application oriented. Our paper makes an interesting connection between the two areas. On one hand, we find a new application of NMF theory. This may open the door for a line of research where we find new applications of NMF in areas such as text learning [21] and tensor analysis [14]. On the other hand, we gain valuable insight into which network models are most suitable in applications. This is crucial, for a suitable model is the starting point for methods and theory. Our study may help researchers identify the right network models and so channel their strengths in the right direction. Our work may also help develop new methods. For example, compared to the general rank-$K$ model, the DCMM model has more structure which we can exploit (see [16, 18], where a simplex structure was discovered in the spectral domain, using some specific features which the DCMM model has but a general rank-$K$ model does not). Our approach is useful, for it ensures that in certain settings, we can use a more specific model and exploit the structures the model provides. Another point is that existing NMF theory usually requires some crucial conditions. However, whether such conditions are reasonable in real applications remains unclear, especially when the conditions are on matrices that are not directly observable. In Sections 3-4, we tackle this problem by providing (a) a detailed explanation of why our NMF assumptions are reasonable in network analysis and (b) new ideas for checking the NMF conditions in real applications when the conditions are on matrices that are not directly observable. We hope our efforts may spark new research along this line.

Acknowledgements. The research was supported in part by NSF Grant DMS-2015469. The author would like to thank Naomi Shaked-Monderer, Helena Smigoc, and Changqing Xu for helpful pointers, and Zheng Tracy Ke and Jiajun Tang for very helpful comments.
1. What is the focus and contribution of the paper regarding symmetric non-negative matrix factorization?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analysis?
3. What are the weaknesses of the paper, especially regarding its algorithm and reference citations?
4. Do you have any suggestions for improving the paper's content, such as providing more theoretical support for the algorithm or moving certain sections to the supplementary material?
5. Are there any limitations to the paper's findings that should be acknowledged?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper deals with the problem of symmetric non-negative matrix factorization (NMF) for a low-rank matrix when some of the eigenvalues of the matrix are negative. The paper provides conditions under which a symmetric NMF solution exists for a matrix of rank K with negative eigenvalues. The main theoretical results are divided into two main theorems, one for m < K/2 and another for general K. The paper also provides an algorithm for symmetric NMF for the degree-corrected mixed-membership block model (DCMM).

Strengths And Weaknesses
Strengths: The theoretical results (Theorems 2.1-2.5) on the conditions under which a symmetric NMF solution exists for a matrix of rank K with negative eigenvalues are the main strengths of the paper, as the results need some technical innovation.
Weaknesses: The algorithm of NMF for DCMM is provided a bit lightly and without theoretical support; thus Section 4 becomes quite antithetical to the tone of the rest of the paper. The paper also misses some relevant references, such as:
(i) Anandkumar, A., Ge, R., Hsu, D. J., and Kakade, S. M. A tensor approach to learning mixed membership community models. Journal of Machine Learning Research, 15(1):2239-2312, 2014.
(ii) Mao, X., Sarkar, P., and Chakrabarti, D. On mixed memberships and symmetric nonnegative matrix factorizations. In International Conference on Machine Learning (pp. 2324-2333), July 2017. PMLR.

Questions
Suggestion: Regarding Section 4, it would be better to either put it in the Supplementary or give the theoretical support behind the estimated conditions.

Limitations
None noted.
NIPS
1. What is the focus and contribution of the paper on non-negative factorization?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis?
3. Do you have any concerns or questions regarding the paper, especially regarding the solvability and uniqueness of the solution, and the notation used?
4. What are the limitations of the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors discuss when a square symmetric non-negative matrix with m negative eigenvalues admits a non-negative factorization of the form $ZPZ'$. The authors' main result is Theorem 2.1, where they show a set of sufficient conditions under which the NMF problem is solvable. The authors also extend the results to some special cases.

Strengths And Weaknesses
The paper is well-written and also provides nice examples to motivate the problem. However, I feel that there are several questions that are not answered properly; they are mentioned below.

Questions
Here, the paper discusses the solvability of the NMF problem. However, of equal importance is the question of when the solution is unique and can be recovered by an algorithm. The paper does not answer these questions for problem (1.1) when the problem is known to be solvable.
How did $J_{K,m}$ still come up in the spectral decomposition of $\Omega$? $\lambda$ and $\Xi$ are the eigenvalues and eigenvectors of the symmetric matrix $\Omega$, so $\Omega$ should just be $\Xi D\Xi'$, right?
Minor: Why use such an uncommon notation for the inner product that is so confusing? How is the matrix of entry-wise ratios relevant (L246) and what do its entries correspond to? Also, what is R in (2.13)?

Limitations
Yes
NIPS
Title
SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients
Abstract
Adaptive gradient methods have shown excellent performance for solving many machine learning problems. Although multiple adaptive gradient methods have recently been studied, they mainly focus on either empirical or theoretical aspects and work only for specific problems by using specific adaptive learning rates. Thus, it is desirable to design a universal framework for practical adaptive gradient algorithms with theoretical guarantees that solves general problems. To fill this gap, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Moreover, our framework can flexibly integrate momentum and variance-reduced techniques. In particular, our novel framework provides convergence analysis support for adaptive gradient methods under the nonconvex setting. In our theoretical analysis, we prove that the SUPER-ADAM algorithm achieves the best-known gradient (i.e., stochastic first-order oracle (SFO)) complexity of Õ(ϵ^{-3}) for finding an ϵ-stationary point of nonconvex optimization, which matches the lower bound for stochastic smooth nonconvex optimization. In numerical experiments, we employ various deep learning tasks to validate that our algorithm consistently outperforms existing adaptive algorithms. Code is available at https://github.com/LIJUNYI95/SuperAdam
1 Introduction
In this paper, we consider solving the following stochastic optimization problem:
min_{x∈X} f(x) := E_{ξ∼D}[f(x; ξ)],   (1)
where f(x) denotes a smooth and possibly nonconvex loss function, and ξ is a random example variable following an unknown data distribution D. Here X = R^d, or X ⊂ R^d is a compact and convex set. Problem (1) frequently appears in many machine learning applications, such as expected loss minimization. Stochastic Gradient Descent (SGD) [14] is commonly used to solve problem (1), e.g., in training Deep Neural Networks (DNNs) [18, 20], because it requires only a mini-batch of samples, or even a single sample, at each iteration. Adaptive gradient methods are among the most important variants of SGD; they use adaptive learning rates and possibly incorporate momentum techniques, so they generally require less parameter tuning and enjoy a faster convergence rate than SGD. Meanwhile, compared to SGD, adaptive gradient methods escape saddle points faster [31]. Thus, adaptive gradient methods have recently been widely developed and studied. For example, the first adaptive gradient method, Adagrad, was proposed in [12]; it significantly outperforms vanilla SGD in the sparse gradient setting. Subsequently, some variants of Adagrad, e.g., SC-Adagrad [28] and SAdagrad [9], were proposed for (strongly) convex optimization. Unfortunately, Adagrad has been found not to perform well in the dense gradient setting and the nonconvex setting. To address this drawback, some other efficient variants of Adagrad, e.g., Adadelta [37] and Adam [22], have been presented, using an exponential moving average instead of the arithmetic average. Adam [22] has recently shown great success in machine learning problems; e.g., it is a default method of choice for training DNNs [17] and for contrastive learning [7]. Unfortunately, Reddi et al.
[29] showed that Adam frequently diverges in settings where the gradient information quickly disappears. To deal with this issue, some variants of the Adam algorithm, e.g., AMSGrad [29], YOGI [36], and generalized Adam [8], have been proposed. Specifically, AMSGrad [29] applies an extra 'long term memory' variable to preserve past gradient information in order to handle the convergence issue of Adam. YOGI [36] introduces an adaptive denominator constant and studies the effect of the mini-batch size on its convergence. Subsequently, Chen et al. [8] studied the convergence of a class of Adam-type algorithms for nonconvex optimization. Zhou et al. [39] analyzed the convergence of a class of adaptive gradient algorithms for nonconvex optimization, and their results show the advantage of adaptive gradient methods over SGD in the sparse stochastic gradient setting. Meanwhile, Liu et al. [24] studied the variances of these adaptive algorithms. More recently, Guo et al. [19] presented a novel convergence analysis for a family of Adam-style methods (including Adam, AMSGrad, Adabound, etc.) with an increasing or large momentum parameter for the first-order moment. Although the above adaptive gradient methods show good empirical performance, their generalization performance is worse than SGD (with momentum) on many deep learning tasks due to the use of coordinate-wise learning rates [35]. Thus, some adaptive gradient methods have recently been proposed to improve the generalization performance of Adam. For example, AdamW [26] and Padam [6] improve the generalization performance of Adam by decoupling weight decay regularization and by introducing a partial adaptive parameter, respectively. Luo et al. [27] proposed a new variant of Adam (i.e., Adabound) that employs dynamic bounds on learning rates to improve generalization performance. Subsequently, AdaBelief [40] was presented to obtain good generalization by adapting the stepsize according to the 'belief' in the current gradient direction. In addition, the norm version of AdaGrad (i.e., AdaGrad-Norm) [34] was proposed to obtain good generalization performance. So far, the above adaptive gradient methods still suffer from a high gradient complexity of O(ϵ^{-4}) for finding an ϵ-stationary point in the worst case, without considering sparsity of the gradient. More recently, some faster variance-reduced adaptive gradient methods, such as STORM [11], Adaptive Normalized SGD [10], and Adam+ [25], have been proposed. For example, STORM applies a momentum-based variance-reduction technique to obtain a lower gradient complexity of Õ(ϵ^{-3}). To the best of our knowledge, all existing adaptive gradient methods use only specific adaptive learning rates and focus on either purely theoretical or purely empirical aspects. Thus, it is desirable to design a universal framework for adaptive gradient methods, covering both theoretical analysis and practical algorithms, to solve generic problems. To fill this gap, in this paper we propose a faster and universal framework of adaptive gradients, i.e., the SUPER-ADAM algorithm, by introducing a universal adaptive matrix. Moreover, we provide a novel convergence analysis framework for adaptive gradient methods under the nonconvex setting based on the mirror descent algorithm [5, 15]. In summary, our main contributions are threefold: 1) We propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradients.
Moreover, our framework can flexibly integrate momentum and variance-reduced techniques. 2) We provide a novel convergence analysis framework for adaptive gradient methods in the nonconvex setting under milder conditions (please see Table 1). 3) We apply a momentum-based variance-reduced gradient estimator [11, 32] to our algorithm (SUPER-ADAM (τ = 1)), which makes our algorithm reach a faster convergence rate than the classic adaptive methods. Specifically, under smoothness of each component function f(x; ξ), we prove that SUPER-ADAM (τ = 1) achieves the best-known gradient complexity of Õ(ϵ^{-3}) for finding an ϵ-stationary point of problem (1), which matches the lower bound for stochastic smooth nonconvex optimization [1]. Under smoothness of the function f(x), we prove that SUPER-ADAM (τ = 0) achieves a gradient complexity of Õ(ϵ^{-4}).
2 Preliminaries
2.1 Notations
‖·‖ denotes the ℓ2 norm for vectors and the spectral norm for matrices. I_d denotes the d-dimensional identity matrix. diag(a) ∈ R^{d×d} denotes a diagonal matrix with diagonal entries a = (a_1, · · · , a_d). For vectors u and v, u^p (p > 0) denotes the element-wise power operation, u/v denotes element-wise division, and max(u, v) denotes the element-wise maximum. 〈u, v〉 denotes the inner product of two vectors u and v. For two sequences {a_n} and {b_n}, we write a_n = O(b_n) if there exists a positive constant C such that a_n ≤ Cb_n, and Õ(·) hides logarithmic factors. A ≻ 0 (A ⪰ 0) denotes a positive (semi)definite matrix. δ_min(A) and δ_max(A) denote the smallest and largest eigenvalues of the matrix A, respectively.
2.2 Adaptive Gradient Algorithms
In this subsection, we review some typical existing adaptive gradient methods. Recently, many adaptive algorithms have been proposed to solve problem (1) and achieve good performance. For example, Adagrad [12] is the first adaptive gradient method with an adaptive learning rate for each individual dimension; it adopts the following update form:
x_{t+1} = x_t − η_t g_t/√v_t,   (2)
where g_t = ∇f(x_t; ξ_t), v_t = (1/t)∑_{j=1}^t g_j^2, and η_t = η/√t with η > 0 is the step size. In fact, η_t is only the basic learning rate, which is the same for all coordinates of the variable x_t, while η_t/√v_{t,i} is the effective learning rate for the i-th coordinate of x_t, which changes across coordinates. Adam [22] is one of the most popular exponential-moving-average variants of Adagrad; it combines the exponential moving average technique with momentum acceleration. Its update form is:
m_t = α_1 m_{t−1} + (1 − α_1)∇f(x_t; ξ_t), v_t = α_2 v_{t−1} + (1 − α_2)(∇f(x_t; ξ_t))^2,
m̂_t = m_t/(1 − α_1^t), v̂_t = v_t/(1 − α_2^t), x_{t+1} = x_t − η_t m̂_t/(√v̂_t + ε), ∀ t ≥ 1,   (3)
where α_1, α_2 ∈ (0, 1), ε > 0, and η_t = η/√t with η > 0. However, Reddi et al. [29] found a divergence issue of the Adam algorithm and proposed a modified version of Adam (i.e., AMSGrad), which adopts a new step instead of the debiasing step in (3) to ensure the decay of the effective learning rate, defined as
v̂_t = max(v̂_{t−1}, v_t), x_{t+1} = x_t − η_t m_t/√v̂_t.   (4)
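For concreteness, here is a small NumPy sketch of the Adam update (3) and the AMSGrad modification (4). The function and variable names are illustrative, and grad stands for a stochastic gradient ∇f(x_t; ξ_t) supplied by the user; this is a sketch of the update equations, not any reference implementation.

```python
import numpy as np

def adam_step(x, m, v, grad, t, eta=1e-3, a1=0.9, a2=0.999, eps=1e-8):
    # Exponential moving averages of the gradient and its square, as in (3).
    m = a1 * m + (1 - a1) * grad
    v = a2 * v + (1 - a2) * grad**2
    m_hat = m / (1 - a1**t)          # bias-corrected first moment
    v_hat = v / (1 - a2**t)          # bias-corrected second moment
    x = x - (eta / np.sqrt(t)) * m_hat / (np.sqrt(v_hat) + eps)
    return x, m, v

def amsgrad_step(x, m, v, v_max, grad, t, eta=1e-3, a1=0.9, a2=0.999):
    # AMSGrad (4): keep a running maximum of v so that the effective
    # learning rate decays, fixing Adam's divergence issue.
    m = a1 * m + (1 - a1) * grad
    v = a2 * v + (1 - a2) * grad**2
    v_max = np.maximum(v_max, v)
    x = x - (eta / np.sqrt(t)) * m / np.sqrt(v_max)
    return x, m, v, v_max
```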
Algorithm 1 SUPER-ADAM Algorithm
1: Input: Total iteration T, tuning parameters {µ_t, α_t}_{t=1}^T, and γ > 0;
2: Initialize: x_1 ∈ X, sample one point ξ_1 and compute g_1 = ∇f(x_1; ξ_1);
3: for t = 1, 2, . . . , T do
4:   Generate an adaptive matrix H_t ∈ R^{d×d}; // two examples for updating H_t:
5:   Case 1: given β ∈ (0, 1), λ > 0, and v_0 = 0,
6:     v_t = βv_{t−1} + (1 − β)∇f(x_t; ξ_t)^2, H_t = diag(√v_t + λ);
7:   Case 2: given β ∈ (0, 1), λ > 0, and b_0 = 0,
8:     b_t = βb_{t−1} + (1 − β)‖∇f(x_t; ξ_t)‖, H_t = (b_t + λ)I_d;
9:   Update x̃_{t+1} = arg min_{x∈X} { 〈g_t, x〉 + (1/(2γ))(x − x_t)^T H_t (x − x_t) };
10:  Update x_{t+1} = (1 − µ_t)x_t + µ_t x̃_{t+1};
11:  Sample one point ξ_{t+1}, and compute g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})[g_t + τ(∇f(x_{t+1}; ξ_{t+1}) − ∇f(x_t; ξ_{t+1}))], where τ ∈ {0, 1};
12: end for
13: Output: (for theory) x_ζ chosen uniformly at random from {x_t}_{t=1}^T; (for practice) x_T.

Due to the use of coordinate-wise learning rates, these adaptive gradient methods frequently have worse generalization performance than SGD (with momentum) [35]. To improve the generalization performance of Adam, AdamW [26] uses decoupled weight decay regularization, defined as
m_t = α_1 m_{t−1} + (1 − α_1)∇f(x_t; ξ_t), v_t = α_2 v_{t−1} + (1 − α_2)(∇f(x_t; ξ_t))^2,
m̂_t = m_t/(1 − α_1^t), v̂_t = v_t/(1 − α_2^t), x_{t+1} = x_t − η_t(α m̂_t/(√v̂_t + ε) + λx_t),   (5)
where α_1, α_2 ∈ (0, 1), α > 0, λ > 0, and ε > 0. More recently, to further improve generalization performance, AdaBelief [40] adopts a stepsize according to the 'belief' in the current gradient direction:
m_t = α_1 m_{t−1} + (1 − α_1)∇f(x_t; ξ_t), v_t = α_2 v_{t−1} + (1 − α_2)(∇f(x_t; ξ_t) − m_t)^2 + ε,
m̂_t = m_t/(1 − α_1^t), v̂_t = v_t/(1 − α_2^t), x_{t+1} = x_t − η_t m̂_t/(√v̂_t + ε), ∀ t ≥ 1,   (6)
where α_1, α_2 ∈ (0, 1), η_t = η/√t with η > 0, and ε > 0. At the same time, some effective adaptive gradient methods [34, 23, 11] have recently been proposed that adopt global adaptive learning rates instead of coordinate-wise counterparts. For example, AdaGrad-Norm [34] applies a global adaptive learning rate in the following update form, for all t ≥ 1:
x_t = x_{t−1} − η∇f(x_{t−1}; ξ_{t−1})/b_t, b_t^2 = b_{t−1}^2 + ‖∇f(x_{t−1}; ξ_{t−1})‖^2, b_0 > 0,   (7)
where η > 0. Adaptive-SGD [23] adopts a global adaptive learning rate, defined for all t ≥ 1 as
η_t = k/(ω + ∑_{i=1}^{t−1}‖∇f(x_i; ξ_i)‖^2)^{1/2+ε}, x_{t+1} = x_t − η_t∇f(x_t; ξ_t),   (8)
where k > 0, ω > 0, and ε ≥ 0. Subsequently, STORM [11] not only uses a global adaptive learning rate but also adopts a variance-reduced gradient estimator to accelerate the algorithm, defined for all t ≥ 1 as
η_t = k/(ω + ∑_{i=1}^t‖∇f(x_i; ξ_i)‖^2)^{1/3}, x_{t+1} = x_t − η_t g_t,
g_{t+1} = ∇f(x_{t+1}; ξ_{t+1}) + (1 − cη_t^2)(g_t − ∇f(x_t; ξ_{t+1})),   (9)
where k > 0, ω > 0, and c > 0.
3 SUPER-ADAM Algorithm
In this section, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Specifically, our SUPER-ADAM algorithm is summarized in Algorithm 1. At step 4 of Algorithm 1, we generate an adaptive matrix H_t based on stochastic gradient information, which can include both coordinate-wise and global learning rates. For example, the H_t generated in case 1 of Algorithm 1 is similar to the coordinate-wise adaptive learning rate used in Adam [22]. The H_t generated in case 2 of Algorithm 1 is similar to the global adaptive learning rate used in AdaGrad-Norm [34] and Adaptive-SGD [23]. Moreover, we can obtain new adaptive learning rates by generating specific adaptive matrices. In case 3, based on the Barzilai-Borwein technique [2], we design a novel adaptive matrix H_t defined as:
b_t = |〈∇f(x_t; ξ_t) − ∇f(x_{t−1}; ξ_t), x_t − x_{t−1}〉| / ‖x_t − x_{t−1}‖^2, H_t = (b_t + λ)I_d,   (10)
where λ > 0. In case 4, as with the adaptive learning rate used in [40], we can generate a coordinate-wise-type adaptive matrix H_t = diag(√v_t + λ) and a global-type adaptive matrix H_t = (b_t + λ)I_d, respectively, defined as:
m_t = β_1 m_{t−1} + (1 − β_1)∇f(x_t; ξ_t), v_t = β_2 v_{t−1} + (1 − β_2)(∇f(x_t; ξ_t) − m_t)^2,
b_t = β_2 b_{t−1} + (1 − β_2)‖∇f(x_t; ξ_t) − m_t‖,   (11)
where β_1, β_2 ∈ (0, 1) and λ > 0. In fact, the adaptive matrix H_t can be given in the generic form H_t = A_t + λI_d, where the matrix A_t carries the adaptive information generated from noisy stochastic gradients, and the tuning parameter λ > 0 balances this adaptive information against the noise; a small code sketch of these cases follows below.
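To illustrate how lightweight these choices are, the following sketch (function names are illustrative; H_t is stored as a diagonal vector or a scalar rather than a dense d × d matrix) computes the adaptive quantities of cases 1-3; case 4 follows the same pattern with ∇f(x_t; ξ_t) − m_t in place of the gradient.

```python
import numpy as np

def H_case1(v, grad, beta=0.999, lam=5e-4):
    # Case 1 (Adam-like, coordinate-wise): H_t = diag(sqrt(v_t) + lam).
    v = beta * v + (1 - beta) * grad**2
    return v, np.sqrt(v) + lam          # diagonal entries of H_t

def H_case2(b, grad, beta=0.999, lam=5e-4):
    # Case 2 (AdaGrad-Norm-like, global): H_t = (b_t + lam) * I_d.
    b = beta * b + (1 - beta) * np.linalg.norm(grad)
    return b, b + lam                   # scalar multiple of I_d

def H_case3(grad_t, grad_prev, x_t, x_prev, lam=5e-4):
    # Case 3 (Barzilai-Borwein, eq. (10)): both gradients are evaluated
    # with the same sample xi_t, at x_t and x_{t-1} respectively.
    s = x_t - x_prev
    b = abs(np.dot(grad_t - grad_prev, s)) / np.dot(s, s)
    return b + lam                      # scalar multiple of I_d
```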
At step 9 of Algorithm 1, we use a generalized gradient descent (i.e., mirror descent) iteration [5, 3, 15] to update x based on the adaptive matrix H_t, defined as
x̃_{t+1} = arg min_{x∈X} { 〈g_t, x〉 + (1/(2γ))(x − x_t)^T H_t (x − x_t) }   (12)
= arg min_{x∈X} { f(x_t) + 〈g_t, x − x_t〉 + (1/(2γ))(x − x_t)^T H_t (x − x_t) },   (13)
where γ > 0 is a constant stepsize. In subproblem (13), we can omit the constant terms f(x_t) and 〈g_t, x_t〉. For subproblem (13), the first two terms of its objective form a linear approximation of the function f(x) based on the stochastic gradient g_t, and the last term can be seen as a Bregman distance between x and x_t based on the Bregman function w_t(x) = (1/2)x^T H_t x. At step 10 of Algorithm 1, we use a momentum update to obtain a weighted solution x_{t+1} = (1 − µ_t)x_t + µ_t x̃_{t+1}, where µ_t ∈ (0, 1] ensures x_{t+1} ∈ X. When X = R^d, step 9 is equivalent to x̃_{t+1} = x_t − γH_t^{-1}g_t. Then by step 10, we have
x_{t+1} = (1 − µ_t)x_t + µ_t x̃_{t+1} = x_t − γµ_t H_t^{-1} g_t.   (14)
In this case, γµ_t is a basic stepsize, like η_t in formula (3) of the Adam algorithm, and H_t^{-1} is an adaptive stepsize, like 1/√v̂_t in formula (3) of the Adam algorithm. At step 11 of Algorithm 1, we use the stochastic gradient estimator g_{t+1}, for all t ≥ 1:
g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})[g_t + τ(∇f(x_{t+1}; ξ_{t+1}) − ∇f(x_t; ξ_{t+1}))],   (15)
where τ ∈ {0, 1} and α_{t+1} ∈ (0, 1] for all t ≥ 1. When τ = 1, we have g_{t+1} = ∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})(g_t − ∇f(x_t; ξ_{t+1})) for all t ≥ 1, which is the momentum-based variance-reduced gradient estimator used in STORM [11]. When τ = 0, we have g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})g_t for all t ≥ 1, which is the basic momentum gradient estimator used in the Adam algorithm [22]. A minimal code sketch of one such iteration is given below.
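As a sanity check on steps 9-11, here is an illustrative sketch of one unconstrained (X = R^d) SUPER-ADAM iteration with the case-1 matrix. Here grad_fn(x, xi) is a placeholder for the user's stochastic gradient oracle ∇f(x; ξ), and the µ_t and α_{t+1} schedules follow Theorems 1-2 below; this is a sketch under those assumptions, not the authors' released implementation.

```python
import numpy as np

def super_adam_step(x, g, v, grad_fn, xi_t, xi_next, t, gamma=1e-3,
                    k=1.0, m=100.0, c=40.0, beta=0.999, lam=5e-4, tau=1):
    # Steps 4-6: case-1 adaptive matrix H_t = diag(sqrt(v_t) + lam).
    v = beta * v + (1 - beta) * grad_fn(x, xi_t)**2
    H_diag = np.sqrt(v) + lam
    # Step 9 for X = R^d: x_tilde = x_t - gamma * H_t^{-1} g_t.
    x_tilde = x - gamma * g / H_diag
    # Step 10: momentum-weighted iterate; mu_t follows Theorem 1 (tau = 1)
    # or Theorem 2 (tau = 0).
    mu = k / (m + t) ** (1/3 if tau == 1 else 1/2)
    x_new = (1 - mu) * x + mu * x_tilde
    # Step 11: gradient estimator (15) with a fresh sample xi_{t+1};
    # the min(., 0.9) clipping is the practical safeguard from Section 6.
    alpha = min(c * (mu**2 if tau == 1 else mu), 0.9)
    g_fresh = grad_fn(x_new, xi_next)
    if tau == 1:   # STORM-style momentum-based variance reduction
        g_new = g_fresh + (1 - alpha) * (g - grad_fn(x, xi_next))
    else:          # basic momentum estimator
        g_new = alpha * g_fresh + (1 - alpha) * g
    return x_new, g_new, v
```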
4 Theoretical Analysis
In this section, we study the convergence properties of our algorithm (SUPER-ADAM) under some mild conditions. All detailed proofs are in the supplementary materials.
4.1 Some Mild Assumptions
Assumption 1. The variance of the unbiased stochastic gradient is bounded, i.e., there exists a constant σ > 0 such that for all x ∈ X, E[∇f(x; ξ)] = ∇f(x) and E‖∇f(x; ξ) − ∇f(x)‖^2 ≤ σ^2.
Assumption 2. The function f(x) is bounded from below on X, i.e., f* = inf_{x∈X} f(x) > −∞.
Assumption 3. Assume the adaptive matrix H_t for all t ≥ 1 satisfies H_t ⪰ ρI_d ≻ 0, where ρ > 0 denotes a lower bound on the smallest eigenvalue of H_t for all t ≥ 1.
Assumption 1 is commonly used in stochastic optimization [15, 11]. Assumption 2 ensures the feasibility of problem (1). In fact, all adaptive algorithms in Table 1 require the mild Assumptions 1 and 2. Assumption 3 guarantees that the adaptive matrices {H_t}_{t≥1} are positive definite and that their smallest eigenvalues have a lower bound ρ > 0. For the adaptive matrices {H_t}_{t≥1} given in our SUPER-ADAM algorithm above, we have ρ ≥ λ > 0. In fact, many existing adaptive algorithms also implicitly use Assumption 3. For example, Zaheer et al. [36] and Zhuang et al. [40] used the iteration x_{t+1} = x_t − η_t m_t/(√v_t + ε) for all t ≥ 0 and ε > 0 to update the variable x, which is equivalent to x_{t+1} = x_t − η_t H_t^{-1} m_t with H_t = diag(√v_t + ε). Clearly, we have H_t ⪰ εI_d ≻ 0. Ward et al. [34] applied a global adaptive learning rate in the update form (7), which is equivalent to x_t = x_{t−1} − ηH_t^{-1}∇f(x_{t−1}; ξ_{t−1}) with H_t = b_t I_d. By (7), we have H_t ⪰ · · · ⪰ H_0 = b_0 I_d ≻ 0. Li et al. [23] and Cutkosky et al. [11] applied global adaptive learning rates in the update forms (8) and (9), which are equivalent to x_{t+1} = x_t − H_t^{-1} g_t, where H_t = (1/η_t)I_d and η_t = k/(ω + ∑_{i=1}^t‖∇f(x_i; ξ_i)‖^2)^α with k > 0, ω > 0, α ∈ (0, 1). By (8) and (9), we have H_t ⪰ · · · ⪰ H_0 = (ω^α/k)I_d ≻ 0. Reddi et al. [29] and Chen et al. [6] used the condition v̂_t = max(v̂_{t−1}, v_t); letting H_t = diag(√v̂_t), we have H_t ⪰ · · · ⪰ H_1 = diag(√v̂_1) = √(1 − α_2) diag(|∇f(x_1; ξ_1)|) ⪰ 0. Without loss of generality, choosing an initial point x_1 such that (∇f(x_1; ξ_1))_j ≠ 0 for all j ∈ [d], we have H_t ⪰ · · · ⪰ H_1 ≻ 0. Interestingly, our SUPER-ADAM algorithm includes a class of novel momentum-based quasi-Newton algorithms obtained by generating an approximated Hessian matrix H_t. In fact, quasi-Newton algorithms [33, 16, 38] generally require bounded approximated Hessian matrices, i.e., κ̂I_d ⪰ H_t ⪰ κ̄I_d ≻ 0 for all t ≥ 1, where κ̂ ≥ κ̄ > 0. Thus Assumption 3 is reasonable and mild. Thanks to Assumption 3, our convergence analysis can easily be applied to stochastic quasi-Newton algorithms.
4.2 A Useful Convergence Measure
We provide a useful measure for analyzing the convergence of our algorithm, defined as
M_t = (1/ρ)‖∇f(x_t) − g_t‖ + (1/γ)‖x̃_{t+1} − x_t‖.   (16)
We define a Bregman distance [4, 5, 15] associated with the function w_t(x) = (1/2)x^T H_t x as follows:
V_t(x, x_t) = w_t(x) − [w_t(x_t) + 〈∇w_t(x_t), x − x_t〉] = (1/2)(x − x_t)^T H_t (x − x_t).   (17)
Thus, step 9 of Algorithm 1 is equivalent to the following mirror descent iteration:
x̃_{t+1} = arg min_{x∈X} { 〈g_t, x〉 + (1/γ)V_t(x, x_t) }.   (18)
As in [15], we define a gradient mapping G_X(x_t, ∇f(x_t), γ) = (1/γ)(x_t − x_{t+1}^+), where
x_{t+1}^+ = arg min_{x∈X} { 〈∇f(x_t), x〉 + (1/γ)V_t(x, x_t) }.   (19)
Let G_X(x_t, g_t, γ) = (1/γ)(x_t − x̃_{t+1}). According to Proposition 1 in [15], we have ‖G_X(x_t, g_t, γ) − G_X(x_t, ∇f(x_t), γ)‖ ≤ (1/ρ)‖∇f(x_t) − g_t‖. Since ‖G_X(x_t, ∇f(x_t), γ)‖ ≤ ‖G_X(x_t, g_t, γ)‖ + ‖G_X(x_t, g_t, γ) − G_X(x_t, ∇f(x_t), γ)‖, we have ‖G_X(x_t, ∇f(x_t), γ)‖ ≤ ‖G_X(x_t, g_t, γ)‖ + (1/ρ)‖∇f(x_t) − g_t‖ = (1/γ)‖x_t − x̃_{t+1}‖ + (1/ρ)‖∇f(x_t) − g_t‖ = M_t. When M_t → 0, we obtain ‖G_X(x_t, ∇f(x_t), γ)‖ → 0, where x_t is a stationary point or local minimum of problem (1) [15]. Clearly, our measure E[M_t] is tighter than the gradient mapping measure E‖G_X(x_t, ∇f(x_t), γ)‖.
4.3 Convergence Analysis of SUPER-ADAM (τ = 1)
In this subsection, we provide the convergence analysis of our SUPER-ADAM (τ = 1) algorithm, which uses the momentum-based variance-reduced gradient estimator [11, 32].
Assumption 4. Each component function f(x; ξ) is L-smooth for all ξ ∈ D, i.e., ‖∇f(x; ξ) − ∇f(y; ξ)‖ ≤ L‖x − y‖ for all x, y ∈ X.
Assumption 4 is widely used in variance-reduced algorithms [13, 11]. By Assumption 4, we have ‖∇f(x) − ∇f(y)‖ = ‖E[∇f(x; ξ) − ∇f(y; ξ)]‖ ≤ E‖∇f(x; ξ) − ∇f(y; ξ)‖ ≤ L‖x − y‖ for all x, y ∈ X. Thus the function f(x) is also L-smooth.
Theorem 1. In Algorithm 1, under Assumptions 1-4, when X ⊂ R^d, given τ = 1, µ_t = k/(m+t)^{1/3} and α_{t+1} = cµ_t^2 for all t ≥ 0, 0 < γ ≤ ρm^{1/3}/(4kL), 1/k^3 + 10L^2γ^2/ρ^2 ≤ c ≤ m^{2/3}/k^2, m ≥ max(3/2, k^3, 8^{3/2}/(3k)^{3/2}), and k > 0, we have
(1/T)∑_{t=1}^T E‖G_X(x_t, ∇f(x_t), γ)‖ ≤ (1/T)∑_{t=1}^T E[M_t] ≤ 2√(2G) m^{1/6}/T^{1/2} + 2√(2G)/T^{1/3},   (20)
where G = (f(x_1) − f*)/(kργ) + m^{1/3}σ^2/(8k^2L^2γ^2) + (k^2c^2σ^2/(4L^2γ^2)) ln(m + T).
Remark 1. Without loss of generality, let ρ = O(1), k = O(1), m = O(1), and γ = O(1); then c = O(1) and G = O(c^2σ^2 ln(m + T)) = Õ(1). Thus, our algorithm has a convergence rate of Õ(1/T^{1/3}). Letting 1/T^{1/3} ≤ ϵ, we have T ≥ ϵ^{-3}. Our algorithm requires computing only two stochastic gradients at each iteration (i.e., the stochastic gradients ∇f(x_{t+1}; ξ_{t+1}) and ∇f(x_t; ξ_{t+1}) needed to estimate g_{t+1}), and it needs T iterations. Thus, SUPER-ADAM (τ = 1) has a gradient complexity of 2·T = Õ(ϵ^{-3}) for finding an ϵ-stationary point.
Corollary 1. In Algorithm 1, under Assumptions 1-4, when X = R^d, given τ = 1, µ_t = k/(m+t)^{1/3} and α_{t+1} = cµ_t^2 for all t ≥ 0, γ = ρm^{1/3}/(νkL) (ν ≥ 4), 1/k^3 + 10L^2γ^2/ρ^2 ≤ c ≤ m^{2/3}/k^2, m ≥ max(3/2, k^3, 8^{3/2}/(3k)^{3/2}), and k > 0, we have
(1/T)∑_{t=1}^T E‖∇f(x_t)‖ ≤ (max_{1≤t≤T}‖H_t‖/ρ)(2√(2G′)/T^{1/2} + 2√(2G′)/(m^{1/6}T^{1/3})),   (21)
where G′ = νL(f(x_1) − f*) + ν^2σ^2/8 + (ν^2k^4c^2σ^2/(4m^{1/3})) ln(m + T).
Remark 2. Under the same conditions as in Theorem 1, based on the metric E‖∇f(x)‖, our SUPER-ADAM (τ = 1) still has a gradient complexity of Õ(ϵ^{-3}). Interestingly, the right-hand side of inequality (21) includes the term max_{1≤t≤T}‖H_t‖/ρ, which can be seen as an upper bound on the condition number of the adaptive matrices {H_t}_{t=1}^T. When using the H_t given in case 1 above, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (G_1 + λ)/λ, as in existing adaptive gradient methods assuming the bounded stochastic gradient ‖∇f(x; ξ)‖_∞ ≤ G_1. When using the H_t given in case 2, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (G_2 + σ + λ)/λ, as in existing adaptive gradient methods assuming the bounded full gradient ‖∇f(x)‖ ≤ G_2. When using the H_t given in case 3, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (L + λ)/λ. When using the H_t given in case 4, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (2G_1 + λ)/λ or max_{1≤t≤T}‖H_t‖/ρ ≤ (2(G_2 + σ) + λ)/λ. Note that we only study the gradient (sample) complexity of our algorithm in the worst case, without considering specific structures such as sparsity of the stochastic gradient. Since the adaptive matrix H_t can be written as H_t = A_t + λI_d, we have max_{1≤t≤T}‖H_t‖/ρ = (max_{1≤t≤T} δ_max(A_t) + λ)/(min_{1≤t≤T} δ_min(A_t) + λ). Here we can only choose a proper tuning parameter λ to balance the adaptive information against the noise in A_t. To reduce max_{1≤t≤T}‖H_t‖/ρ, we cannot simply increase λ; we should instead design the matrix A_t to have a small condition number by techniques such as clipping [27].
4.4 Convergence Analysis of SUPER-ADAM (τ = 0)
In this subsection, we provide the convergence analysis of our SUPER-ADAM (τ = 0) algorithm, which uses the basic momentum stochastic gradient estimator [22].
Assumption 5. The function f(x) = E_ξ[f(x; ξ)] is L-smooth, i.e., ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖ for all x, y ∈ X.
Assumption 5 is widely used in adaptive algorithms [36, 8, 40] and is milder than Assumption 4.
Theorem 2. In Algorithm 1, under Assumptions 1, 2, 3, and 5, when X ⊂ R^d, given τ = 0, µ_t = k/(m+t)^{1/2}, α_{t+1} = cµ_t for all t ≥ 0, k > 0, 0 < γ ≤ ρm^{1/2}/(8Lk), 8Lγ/ρ ≤ c ≤ m^{1/2}/k, and m ≥ k^2, we have
(1/T)∑_{t=1}^T E‖G_X(x_t, ∇f(x_t), γ)‖ ≤ (1/T)∑_{t=1}^T E[M_t] ≤ 2√(2M) m^{1/4}/T^{1/2} + 2√(2M)/T^{1/4},
where M = (f(x_1) − f*)/(ργk) + 2σ^2/(ργkL) + (2mσ^2/(ργkL)) ln(m + T).
Remark 3. Without loss of generality, let ρ = O(1), k = O(1), m = O(1), and γ = O(1); then M = O(σ^2 ln(m + T)) = Õ(1). Thus, our algorithm has a convergence rate of Õ(1/T^{1/4}). Considering 1/T^{1/4} ≤ ϵ, we have T ≥ ϵ^{-4}. Our algorithm requires computing one stochastic gradient at each iteration and needs T iterations. Thus, SUPER-ADAM (τ = 0) has a gradient complexity of 1·T = Õ(ϵ^{-4}) for finding an ϵ-stationary point.
Corollary 2. In Algorithm 1, under Assumptions 1, 2, 3, and 5, when X = R^d, given τ = 0, µ_t = k/(m+t)^{1/2}, α_{t+1} = cµ_t for all t ≥ 0, k > 0, γ = ρm^{1/2}/(νLk) (ν ≥ 8), 8Lγ/ρ ≤ c ≤ m^{1/2}/k, and m ≥ k^2, we have
(1/T)∑_{t=1}^T E‖∇f(x_t)‖ ≤ (max_{1≤t≤T}‖H_t‖/ρ)(2√(2M′)/T^{1/2} + 2√(2M′)/(m^{1/4}T^{1/4})),
where M′ = νL(f(x_1) − f*) + 2νσ^2 + 2νmσ^2 ln(m + T).
Remark 4. Under the same conditions as in Theorem 2, based on the metric E‖∇f(x_t)‖, our SUPER-ADAM (τ = 0) still has a gradient complexity of Õ(ϵ^{-4}) for finding an ϵ-stationary point.
5 Differences between Our Algorithm and Related Algorithms
In this section, we highlight some significant differences between our algorithm and related algorithms, namely the STORM algorithm [11] and Adam-type algorithms [22, 29, 40]. Although our SUPER-ADAM (τ = 1) algorithm uses the same stochastic gradient estimator as STORM, there are significant differences: 1) Our algorithm handles both constrained and unconstrained optimization, while STORM handles only unconstrained optimization. 2) In our algorithm, we introduce a weighted solution x_{t+1} at step 10 via a momentum update. As a result, our algorithm can easily incorporate various adaptive learning rates and variance-reduced techniques. Specifically, we can flexibly use various adaptive learning rates and different stochastic gradient estimators g_t at step 9 of our algorithm; in fact, this is one of the important novelties of our paper. STORM, by contrast, uses only a simple gradient descent iteration with a specific monotonically decreasing adaptive learning rate. Similarly, although our SUPER-ADAM (τ = 0) algorithm uses the same stochastic gradient estimator as these Adam-type algorithms, there are significant differences beyond the use of different adaptive learning rates. The Adam-type algorithms use a decreasing learning rate η_t = η/√t (please see (3), (4), and (6) above), while our algorithm uses only a constant learning rate γ besides an adaptive learning rate. Moreover, our algorithm introduces a weighted solution x_{t+1} at step 10 with a decreasing parameter µ_t = k/√(m+t) (please see Theorem 2) and uses a decreasing parameter α_{t+1} = cµ_t in the gradient estimator, while the Adam-type algorithms use only a constant parameter α_1 ∈ (0, 1) in their gradient estimators. In this way, our algorithm uses the decreasing parameters µ_t and α_{t+1} to control the noise in our gradient estimator, so our convergence analysis for constrained optimization does not require additional assumptions such as a bounded (stochastic) gradient. For example, when τ = 0, our gradient estimator is g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})g_t. Intuitively, as t grows, α_{t+1} = ck/√(m+t) becomes small, so the new noise added to our gradient estimator g_{t+1} also becomes smaller; the short numerical check below illustrates this decay.
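A quick numerical check of this decay, using the tuning values reported later in Section 6 for CIFAR (k = 1, m = 100, c = 40 for τ = 1 and c = 20 for τ = 0, together with the min(·, 0.9) clipping); this is a sketch for illustration only:

```python
# mu_t and alpha_{t+1} schedules from Theorems 1-2; alpha drops below 1
# (and below the 0.9 clip) after the first few hundred iterations.
k, m = 1.0, 100.0
c1, c0 = 40.0, 20.0                     # tau = 1 and tau = 0 settings
for t in [1, 10, 100, 1000, 10000]:
    mu1 = k / (m + t) ** (1/3)          # tau = 1
    mu0 = k / (m + t) ** (1/2)          # tau = 0
    print(t, round(min(c1 * mu1**2, 0.9), 4), round(min(c0 * mu0, 0.9), 4))
```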
6 Numerical Experiments
In this section, we conduct experiments to empirically evaluate our SUPER-ADAM algorithm on two deep learning tasks, as in [25]: image classification on the CIFAR-10, CIFAR-100, and ImageNet datasets, and language modeling on the Wiki-Text2 dataset. In the experiments, we compare our SUPER-ADAM algorithm against several state-of-the-art adaptive gradient algorithms, including: (1) SGD, (2) Adam [22], (3) Amsgrad [29], (4) AdaGrad-Norm [23], (5) Adam+ [25], (6) STORM [11], and (7) AdaBelief [40]. For our SUPER-ADAM algorithm, we consider τ = 1 and τ = 0, respectively. Without loss of generality, in the following experiments we only use case 1 of Algorithm 1 to generate the adaptive matrix H_t, and we set λ = 0.0005. All experiments are run on a machine with an Intel Xeon E5-2683 CPU and 4 Nvidia Tesla P40 GPUs.
6.1 Image Classification Task
In this experiment, we conduct the image classification task on the CIFAR-10, CIFAR-100, and ImageNet datasets. We train a ResNet-18 [20] and a VGG-19 [30] on the CIFAR-10 and CIFAR-100 datasets, respectively. For all optimizers, we set the batch size to 128 and train for 200 epochs. For the learning rates and other hyper-parameters, we do a grid search and report the best result for each optimizer. For the Adam, Amsgrad, and AdaBelief algorithms, we set the learning rate to 0.001. For AdaGrad-Norm, the best learning rate is 17 for CIFAR-10 and 10 for CIFAR-100, respectively. For Adam+, we use the tuning parameters recommended in [25]. For STORM, the best result is obtained with w = 6, k = 10, and c = 100 for CIFAR-10, and with w = 3, k = 10, and c = 100 for CIFAR-100. For our SUPER-ADAM algorithm, on both the CIFAR-10 and CIFAR-100 datasets, we set k = 1, m = 100, c = 40, γ = 0.001 when τ = 1, and k = 1, m = 100, c = 20, γ = 0.001 when τ = 0. Note that although c > m^{2/3}/k^2 (respectively c > m^{1/2}/k) in our algorithm, we set α_t = min(α_t, 0.9) during the first several iterations. In our algorithm, µ_t = k/(m+t)^{1/3} (respectively µ_t = k/(m+t)^{1/2}) decreases as the iteration number t increases, so α_{t+1} = cµ_t^2 (respectively α_{t+1} = cµ_t) will be less than 1 after the first several iterations. We train a ResNet-34 [20] on the ImageNet dataset. For all optimizers, we set the batch size to 256 and train for 60 epochs. For Adam, Amsgrad, and AdaBelief, we set the learning rate to 0.001. For AdaGrad-Norm, the best learning rate is 30. For Adam+, we set the learning rate to 0.1. For STORM, the best result is obtained with k = 5, w = 100, and c = 10. For our algorithm, we set k = 1, m = 100, c = 40, γ = 0.01 when τ = 1, and k = 1, m = 100, c = 4, γ = 0.04 when τ = 0. Figures 1 and 2 show the train and test errors and accuracy results on the CIFAR-10 and CIFAR-100 datasets, respectively. Our SUPER-ADAM algorithm consistently outperforms the other optimizers by a large margin, especially when we set τ = 1. When we set τ = 0, our SUPER-ADAM algorithm obtains performance comparable to Adam/AmsGrad. Figure 3 shows the ImageNet results of the different optimizers with ResNet-34; our algorithm outperforms the other optimizers, especially with τ = 1. Figure 4 shows that both the condition number of H_t and the ℓ2 norm of the full gradient (i.e., ‖∇f(x_t)‖) decrease as the number of iterations increases. From these results, we find that since the condition number of H_t decreases as the number of iterations increases, it must have an upper bound. Thus, these experimental results further demonstrate that the convergence results in Corollaries 1 and 2 above are reasonable.
6.2 Language Modeling Task
In this experiment, we conduct the language modeling task on the Wiki-Text2 dataset. Specifically, we train a 2-layer LSTM [21] and a 2-layer Transformer on the Wiki-Text2 dataset. For the LSTM, we use 650-dimensional word embeddings and 650 hidden units per layer.
Due to space limitations, we provide the experimental results for the Transformer in the supplementary materials. In this experiment, we set the batch size to 20 and train for 40 epochs with a dropout rate of 0.5. We also clip the gradients by norm 0.25 to guard against exploding gradients in the LSTM, and we decrease the learning rate by a factor of 4 whenever the validation error increases. For the learning rate, we again do a grid search and report the best result for each optimizer. For the Adam and Amsgrad algorithms, we set the learning rate to 0.001 for the LSTM. For the AdaGrad-Norm algorithm, the best learning rate is 40. For the Adam+ algorithm, we use a learning rate of 20. For the AdaBelief algorithm, we set the learning rate to 0.1. For the STORM algorithm, we set w = 50, k = 10, and c = 100. For our SUPER-ADAM algorithm, we set k = 1, m = 100, c = 40, γ = 0.001 when τ = 1, and k = 1, m = 100, c = 20, γ = 0.01 when τ = 0. Figure 5 shows the train and test perplexities (losses) for the different optimizers. When τ = 1, our SUPER-ADAM algorithm outperforms all the other optimizers. When τ = 0, our SUPER-ADAM optimizer obtains performance comparable to the other Adam-type optimizers.
7 Conclusions
In this paper, we proposed a novel, faster, and universal adaptive gradient framework (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. In particular, our algorithm can flexibly work with momentum and variance-reduced techniques. Moreover, we provided a novel convergence analysis framework for adaptive gradient methods under the nonconvex setting. Experimental studies were conducted on both image classification and language modeling tasks, and all empirical results verify the superior performance of our algorithm.
Acknowledgments and Disclosure of Funding
This work was partially supported by NSF IIS 1845666, 1852606, 1838627, 1837956, 1956002, OIA 2040588.
1. What is the main contribution of the paper regarding adaptive gradients methods? 2. What are the concerns regarding the practicability of SUPER-Adam in Case 2? 3. Why do the authors not report the close relation between their convergence analysis and the previous work on STORM? 4. What are the issues with the learning rate decay strategy used in the experiments? 5. How does the reviewer feel about the fairness of the comparison between SUPER-Adam and other optimizing methods, including SGD?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a universal framework for adaptive gradient methods by introducing a universal adaptive matrix. It also offers a theoretical convergence analysis for this optimization method and proves that the method can achieve a complexity of O(ϵ^{-3}) for finding an ϵ-stationary point.
Review
The main concern is about the practicability of SUPER-Adam in Case 2 of Algorithm 1. It is known that the number of parameters of modern DNNs is typically more than 10 million; for example, VGG16 has ~134 million parameters and ResNet18 has 33 million. Therefore, H_t in SUPER-Adam in Case 2 will be at least 8 orders of magnitude larger than H_t in SUPER-Adam in the commonly used Case 1, so we would need to scale the learning rate η_t by a factor of > 10 million to maintain a convergence rate comparable to Adam. This overlarge factor may be impractical. Indeed, from Section 6.1 we know the authors also did not set η so large. When M_t → 0, SUPER-Adam in Case 1 cannot ensure ρH_t^{-1} → I_d in Eq. (17), and then ‖∇f(x_t)‖ → 0 will not hold. In this case, Theorem 1 and Theorem 2 might be somewhat meaningless. Theorem 1 and Theorem 2 and their proofs obviously follow the paper that proposed STORM with little change, but the authors did not report this close relation; therefore, the bright spot of the convergence analysis is discounted. From the 1st and 5th subfigures of Figure 2, we know the learning rate for SUPER-Adam decays by a factor (maybe 10) at the 70th and 100th epochs, respectively, which is not described in the experimental settings. Moreover, this learning rate decay strategy seems not to be used for the other optimizing methods except AdaBelief. Hence, the comparison may be unfair. Additionally, it is known that SGD-type methods commonly perform better for image classification than adaptive gradient methods, so I would like to see SGD also compared to SUPER-Adam.
Title SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients Abstract Adaptive gradient methods have shown excellent performances for solving many machine learning problems. Although multiple adaptive gradient methods were recently studied, they mainly focus on either empirical or theoretical aspects and also only work for specific problems by using some specific adaptive learning rates. Thus, it is desired to design a universal framework for practical algorithms of adaptive gradients with theoretical guarantee to solve general problems. To fill this gap, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Moreover, our framework can flexibly integrate the momentum and variance reduced techniques. In particular, our novel framework provides the convergence analysis support for adaptive gradient methods under the nonconvex setting. In theoretical analysis, we prove that our SUPER-ADAM algorithm can achieve the best known gradient (i.e., stochastic first-order oracle (SFO)) complexity of Õ( −3) for finding an -stationary point of nonconvex optimization, which matches the lower bound for stochastic smooth nonconvex optimization. In numerical experiments, we employ various deep learning tasks to validate that our algorithm consistently outperforms the existing adaptive algorithms. Code is available at https://github.com/LIJUNYI95/SuperAdam 1 Introduction In the paper, we consider solving the following stochastic optimization problem: min x∈X f(x) := Eξ∼D[f(x; ξ)], (1) where f(x) denotes a smooth and possibly nonconvex loss function, and ξ is a random example variable following an unknown data distribution D. Here X = Rd or X ⊂ Rd is a compact and convex set. The problem (1) frequently appears in many machine learning applications such as the expectation loss minimization. Recently, Stochastic Gradient Descent (SGD) [14] is commonly used to solve the problem (1) such as Deep Neural Networks (DNNs) training [18, 20], due to only requiring a mini-batch samples or even one sample at each iteration. Adaptive gradient methods are one of the most important variants of SGD, which use adaptive learning rates and possibly incorporate momentum techniques, so they generally require less parameter tuning and enjoy faster convergence rate than SGD. Meanwhile, compared to SGD, adaptive gradient methods escape saddle points faster [31]. Thus, recently adaptive gradient methods have been widely developed and studied. For example, the first adaptive gradient method i.e., Adagrad has been proposed in [12], which significantly outperforms the vanilla SGD under the sparse gradient setting. Subsequently, some variants of Adagrad e.g., SC-Adagra [28] and SAdagrad [9] have been proposed for (strongly) convex optimization. Unfortunately, Adagrad has been found that it does not be well competent to the dense gradient setting and the nonconvex setting. To address this drawback, some other efficient variants of 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Adagrad, e.g., Adadelta [37], Adam [22], have been presented by using exponential moving average instead of the arithmetic average. Adam [22] recently has been shown great successes in current machine learning problems, e.g., it is a default method of choice for training DNNs [17] and contrastive learning [7]. Unfortunately, Reddi et al. 
[29] still showed that Adam is frequently divergent in some settings where the gradient information quickly disappear. To deal with this issue, some variants of Adam algorithm, e.g., AMSGrad [29], YOGI [36] and generalized Adam [8] have been proposed. Specifically, AMSGrad [29] applies an extra ‘long term memory’ variable to preserve the past gradient information in order to handle the convergence issue of Adam. YOGI [36] introduces an adaptive denominator constant, and studies effect of the mini-batch size in its convergence. Subsequently, Chen et al. [8] studied the convergence of a class of Adam-type algorithms for nonconvex optimization. Zhou et al. [39] analyzed the convergence of a class of adaptive gradient algorithms for nonconvex optimization, and the result shows the advantage of adaptive gradient methods over SGD in sparse stochastic gradient setting. Meanwhile, Liu et al. [24] studied the variances of these adaptive algorithms. More recently, Guo et al. [19] presented a novel convergence analysis for a family of Adam-style methods (including Adam, AMSGrad, Adabound, etc.) with an increasing or large momentum parameter for the first-order moment. Although the above these adaptive gradient methods show some good empirical performances, their generalization performance is worse than SGD (with momentum) on many deep learning tasks due to using the coordinate-wise learning rates [35]. Thus, recently some adaptive gradient methods have been proposed to improve the generalization performance of Adam. For example, AdamW [26] and Padam [6] improve the generalization performance of Adam by decoupling weight decay regularization and introducing a partial adaptive parameter, respectively. Luo et al. [27] proposed a new variant of Adam (i.e., Adabound) by employing dynamic bounds on learning rates to improve the generalization performance. Subsequently, AdaBelief [40] has been presented to obtain a good generalization by adopting the stepsize according to the ‘belief’ in the current gradient direction. In addition, the norm version of AdaGrad (i.e., AdaGrad-Norm) [34] has been proposed to obtain a good generalization performance. So far, the above adaptive gradient methods still suffer from a high gradient complexity of O( −4) for finding -stationary point in the worst case without considering sparsity of gradient. More recently, some faster variance-reduced adaptive gradient methods such as STORM [11], Adaptive Normalized SGD [10], Adam+ [25] have been proposed. For example, STORM applies the momentum-based variance reduced technique to obtain a lower gradient complexity of Õ( −3). To the best of our knowledge, all these existing adaptive gradient methods only use some specific adaptive learning rates with focusing on either pure theoretical or empirical aspects. Thus, it is desired to design a universal framework for the adaptive gradient methods on both theoretical analysis and practical algorithms to solve the generic problems. To fill this gap, in the paper, we propose a faster and universal framework of adaptive gradients, i.e., SUPER-ADAM algorithm, by introducing a universal adaptive matrix. Moreover, we provide a novel convergence analysis framework for the adaptive gradient methods under the nonconvex setting based on the mirror descent algorithm [5, 15]. In summary, our main contributions are threefold: 1) We propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradients. 
Moreover, our framework can flexibly integrate the momentum and variance-reduced techniques. 2) We provide a novel convergence analysis framework for the adaptive gradient methods in the nonconvex setting under the milder conditions (Please see Table 1). 3) We apply a momentum-based variance reduced gradient estimator [11, 32] to our algorithm (SUPER-ADAM (τ = 1)), which makes our algorithm reach a faster convergence rate than the classic adaptive methods. Specifically, under smoothness of each component function f(x; ξ), we prove that the SUPER-ADAM (τ = 1) achieves the best known gradient complexity of Õ( −3) for finding an -stationary point of the problem (1), which matches the lower bound for stochastic smooth nonconvex optimization [1]. Under smoothness of the function f(x), we prove that the SUPER-ADAM (τ = 0) achieves a gradient complexity of Õ( −4). 2 Preliminaries 2.1 Notations ‖ · ‖ denotes the `2 norm for vectors and spectral norm for matrices, respectively. Id denotes a d-dimensional identity matrix. diag(a) ∈ Rd denotes a diagonal matrix with diagonal entries a = (a1, · · · , ad). For vectors u and v, up (p > 0) denotes element-wise power operation, u/v denotes element-wise division and max(u, v) denotes element-wise maximum. 〈u, v〉 denotes the inner product of two vectors u and v. For two sequences {an} and {bn}, we write an = O(bn) if there exists a positive constant C such that an ≤ Cbn, and Õ(·) hides logarithmic factors. A 0( 0) denotes a positive (semi)definite matrix. δmin(A) and δmax(A) denote the smallest and largest eigenvalues of the matrix A, respectively. 2.2 Adaptive Gradient Algorithms In the subsection, we review some existing typical adaptive gradient methods. Recently, many adaptive algorithms have been proposed to solve the problem (1), and achieve good performances. For example, Adagrad [12] is the first adaptive gradient method with adaptive learning rate for each individual dimension, which adopts the following update form: xt+1 = xt − ηtgt/ √ vt, (2) where gt = ∇f(xt; ξt) and vt = 1t ∑t j=1 g 2 j , and ηt = η√ t with η > 0 is the step size. In fact, ηt only is the basic learning rate that is the same for all coordinates of variable xt, while ηt√vt,i is the effective learning rate for the i-th coordinate of xt, which changes across the coordinates. Adam [22] is one of the most popular exponential moving average variant of Adagrad, which combines the exponential moving average technique with momentum acceleration. Its update form is: mt = α1mt−1 + (1− α1)∇f(xt; ξt), vt = α2vt−1 + (1− α2)(∇f(xt; ξt))2 m̂t = mt/(1− αt1), v̂t = vt/(1− αt2), xt+1 = xt − ηtm̂t/( √ v̂t + ε), ∀ t ≥ 1 (3) where α1, α2 ∈ (0, 1) and ε > 0, and ηt = η√t with η > 0. However, Reddi et al. [29] found a divergence issue of the Adam algorithm, and proposed a modified version of Adam (i.e., Amsgrad), which adopts a new step instead of the debiasing step in (3) to ensure the decay of the effective learning rate, defined as v̂t = max(v̂t−1, vt), xt+1 = xt − ηtmt/ √ v̂t. (4) Algorithm 1 SUPER-ADAM Algorithm 1: Input: Total iteration T , and tuning parameters {µt, αt}Tt=1, γ > 0 ; 2: Initialize: x1 ∈ X , sample one point ξ1 and compute g1 = ∇f(x1; ξ1); 3: for t = 1, 2, . . . 
, T do 4: Generate an adaptive matrix Ht ∈ Rd×d; // Given two examples to update Ht: 5: Case 1: given β ∈ (0, 1), λ > 0 and v0 = 0, 6: vt = βvt−1 + (1− β)∇f(xt; ξt)2, Ht = diag( √ vt + λ); 7: Case 2: given β ∈ (0, 1), λ > 0 and b0 = 0, 8: bt = βbt−1 + (1− β)‖∇f(xt; ξt)‖, Ht = ( bt + λ ) Id; 9: Update x̃t+1 = arg minx∈X { 〈gt, x〉+ 12γ (x− xt) THt(x− xt) } ; 10: Update xt+1 = (1− µt)xt + µtx̃t+1; 11: Sample one point ξt+1, and compute gt+1 = αt+1∇f(xt+1; ξt+1) + (1 − αt+1) [ gt + τ ( ∇f(xt+1; ξt+1)−∇f(xt; ξt+1) )] , where τ ∈ {0, 1}; 12: end for 13: Output: (for theoretical) xζ chosen uniformly random from {xt}Tt=1; (for practical ) xT . Due to using the coordinate-wise learning rates, these adaptive gradient methods frequently have worse generalization performance than SGD (with momentum) [35]. To improve the generalization performance of Adam, AdamW [26] uses a decoupled weight decay regularization, defined as mt = α1mt−1 + (1− α1)∇f(xt; ξt), vt = α2vt−1 + (1− α2)(∇f(xt; ξt))2 m̂t = mt/(1− αt1), v̂t = vt/(1− αt2), xt+1 = xt − ηt ( αm̂t/( √ v̂t + ε) + λxt ) , (5) where α1, α2 ∈ (0, 1), α > 0, λ > 0 and ε > 0. More recently, to further improve generalization performance, AdaBelief [40] adopts a stepsize according to ‘belief’ in the current gradient direction, mt = α1mt−1 + (1− α1)∇f(xt; ξt), vt = α2vt−1 + (1− α2)(∇f(xt; ξt)−mt)2 + ε m̂t = mt/(1− αt1), v̂t = vt/(1− αt2), xt+1 = xt − ηtm̂t/( √ v̂t + ε), ∀ t ≥ 1 (6) where α1, α2 ∈ (0, 1), and ηt = η√t with η > 0, and ε > 0. At the same time, to improve generalization performance, recently some effective adaptive gradient methods [34, 23, 11] have been proposed with adopting the global adaptive learning rates instead of coordinate-wise counterparts. For example, AdaGrad-Norm [34] applies a global adaptive learning rate to the following update form, for all t ≥ 1 xt = xt−1 − η∇f(xt−1; ξt−1)/bt, b2t = b2t−1 + ‖∇f(xt−1; ξt−1)‖2, b0 > 0, (7) where η > 0. The adaptive-SGD [23] adopts a global adaptive learning rate, defined as for all t ≥ 1 ηt = k( ω + ∑t−1 i=1 ‖∇f(xi; ξi)‖2 )1/2+ε , xt+1 = xt − ηt∇f(xt; ξt), (8) where k > 0, ω > 0, and ε ≥ 0. Subsequently, STORM [11] not only uses a global adaptive learning rate but also adopts the variance-reduced technique in gradient estimator to accelerate algorithm, defined as for all t ≥ 1 ηt = k( ω + ∑t i=1 ‖∇f(xi; ξi)‖2 )1/3 , xt+1 = xt − ηtgt, (9) gt+1 = ∇f(xt+1; ξt+1) + (1− cη2t )(gt −∇f(xt; ξt+1)), where k > 0, ω > 0 and c > 0. 3 SUPER-ADAM Algorithm In the section, we propose a faster and universal framework of adaptive gradients (i.e., SUPERADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Specifically, our SUPER-ADAM algorithm is summarized in Algorithm 1. At the step 4 in Algorithm 1, we generate an adaptive matrix Ht based on stochastic gradient information, which can include both coordinate-wise and global learning rates. For example, Ht generated from the case 1 in Algorithm 1 is similar to the coordinate-wise adaptive learning rate used in Adam [22]. Ht generated from the case 2 in Algorithm 1 is similar to the global adaptive learning rate used in the AdaGrad-Norm [34] and Adaptive-SGD [23]. Moreover, we can obtain some new adaptive learning rates by generating some specific adaptive matrices. In the case 3, based on Barzilai-Borwein technique [2], we design a novel adaptive matrix Ht defined as: bt = |〈∇f(xt; ξt)−∇f(xt−1; ξt), xt − xt−1〉| ‖xt − xt−1‖2 , Ht = (bt + λ)Id, (10) where λ > 0. 
In the case 4, as the adaptive learning rate used in [40], we can generate a coordinatewise-type adaptive matrix Ht = diag( √ vt + λ) and a global-type adaptive matrix Ht = (bt + λ)Id, respectively, defined as: mt = β1mt−1 + (1− β1)∇f(xt; ξt), vt = β2vt−1 + (1− β2)(∇f(xt; ξt)−mt)2, bt = β2bt−1 + (1− β2)‖∇f(xt; ξt)−mt‖, (11) where β1, β2 ∈ (0, 1) and λ > 0. In fact, the adaptive matrix Ht can be given in a generic form Ht = At + λId, where the matrix At includes the adaptive information that is generated from stochastic gradients with noises, and the tuning parameter λ > 0 balances these adaptive information with noises. At the step 9 in Algorithm 1, we use a generalized gradient descent (i.e., mirror descent) iteration [5, 3, 15] to update x based on the adaptive matrix Ht, defined as x̃t+1 = arg min x∈X { 〈gt, x〉+ 1 2γ (x− xt)THt(x− xt) } (12) = arg min x∈X { f(xt) + 〈gt, x− xt〉+ 1 2γ (x− xt)THt(x− xt) } , (13) where γ > 0 is a constant stepsize. In the above subproblem (13), we can omit the constant terms f(xt) and 〈gt, xt〉. For the subproblem (13), the first two terms of its objective function is a linear function approximated the function f(x) based on the stochastic gradient gt, and the last term can be seen as a Bregman distance between x and xt based on the Bregman function wt(x) = 1 2x THtx. At the step 10 in Algorithm 1, we use momentum update to obtain a weighted solution xt+1 = (1 − µt)xt + µtx̃t+1, where µt ∈ (0, 1] ensures xt+1 ∈ X . When X = Rd, the step 9 is equivalent to x̃t+1 = xt − γH−1t gt. Then by the step 10, we have xt+1 = (1− µt)xt + µtx̃t+1 = xt − γµtH−1t gt. (14) Under this case, γµt is a basic stepsize as ηt in the formula (3) of Adam algorithm, and H−1t is an adaptive stepsize as 1√ v̂t in the formula (3) of Adam algorithm. At the step 11 of Algorithm 1, we use the stochastic gradient estimator gt+1 for all t ≥ 1: gt+1 = αt+1∇f(xt+1; ξt+1) + (1− αt+1) [ gt + τ ( ∇f(xt+1; ξt+1)−∇f(xt; ξt+1) )] , (15) where τ ∈ {0, 1} and αt+1 ∈ (0, 1] for all t ≥ 1. When τ = 1, we have gt+1 = ∇f(xt+1; ξt+1) + (1− αt+1) ( gt −∇f(xt; ξt+1) ) for all t ≥ 1, which is a momentum-based variance reduced gradient estimator used in STORM [11]. When τ = 0, we have gt+1 = αt+1∇f(xt+1; ξt+1) + (1− αt+1)gt for all t ≥ 1, which is a basic momentum gradient estimator used in the Adam algorithm [22]. 4 Theoretical Analysis In this section, we study the convergence properties of our algorithm (SUPER-ADAM) under some mild conditions. All detailed proofs are in the supplementary materials. 4.1 Some Mild Assumptions Assumption 1. Variance of unbiased stochastic gradient is bounded, i.e., there exists a constant σ > 0 such that for all x ∈ X , it follows E[∇f(x; ξ)] = ∇f(x) and E‖∇f(x; ξ)−∇f(x)‖2 ≤ σ2. Assumption 2. The function f(x) is bounded from below in X , i.e., f∗ = infx∈X f(x). Assumption 3. Assume the adaptive matrix Ht for all t ≥ 1 satisfies Ht ρId 0, and ρ > 0 denotes a lower bound of the smallest eigenvalue of Ht for all t ≥ 1. Assumption 1 is commonly used in stochastic optimization [15, 11]. Assumption 2 ensures the feasibility of the problem (1). In fact, all adaptive algorithms in Table 1 require these mild Assumptions 1 and 2. Assumption 3 guarantees that the adaptive matrices {Ht}t≥1 are positive definite and their smallest eigenvalues have a lower bound ρ > 0. From the above adaptive matrices {Ht}t≥1 given in our SUPER-ADAM algorithm, we have ρ ≥ λ > 0. In fact, many existing adaptive algorithms also implicitly use Assumption 3. For example, Zaheer et al. [36] and Zhuang et al. 
[40] used the following iteration form to update the variable x: xt+1 = xt−ηt mt√vt+ε for all t ≥ 0 and ε > 0, which is equivalent to xt+1 = xt − ηtH−1t mt with Ht = diag( √ vt + ε). Clearly, we have Ht εId 0. Ward et al. [34] applied a global adaptive learning rate to the update form in (7), which is equivalent to the following form: xt = xt−1 − ηH−1t ∇f(xt−1; ξt−1) with Ht = btId. By the above (7), we have Ht · · · H0 = b0Id 0. Li et al. [23] and Cutkosky et al. [11] applied a global adaptive learning rate to the update forms in (8) and (9), which is equivalent to xt+1 = xt −H−1t gt, where Ht = (1/ηt)Id and ηt = k/ ( ω + ∑t i=1 ‖∇f(xi; ξi)‖2 )α with k > 0, ω > 0, α ∈ (0, 1). By the above (8) and (9), we have Ht · · · H0 = (ωα/k)Id 0. Reddi et al. [29] and Chen et al. [6] used the condition v̂t = max(v̂t−1, vt), and let Ht = diag( √ v̂t), thus we have Ht · · · H1 = diag( √ v̂1) = √ 1− α2diag(|∇f(x1; ξ1)|) 0. Without loss of generality, choosing an initial point x1 and let (∇f(x1; ξ1))j 6= 0 for all j ∈ [d], we have Ht · · · H1 0. Interestingly, our SUPER-ADAM algorithm includes a class of novel momentum-based quasi-Newton algorithms by generating an approximated Hessian matrix Ht. In fact, the quasi-Newton algorithms [33, 16, 38] generally require the bounded approximated Hessian matrices, i.e., κ̂Id Ht κ̄Id 0 for all t ≥ 1, where κ̂ ≥ κ̄ > 0. Thus Assumption 3 is reasonable and mild. Due to Assumption 3, our convergence analysis can be easily applied to the stochastic quasi-Newton algorithms. 4.2 A Useful Convergence Measure We provide a useful measure to analyze the convergence of our algorithm, defined as Mt = 1 ρ ‖∇f(xt)− gt‖+ 1 γ ‖x̃t+1 − xt‖. (16) We define a Bregman distance [4, 5, 15] associated with function wt(x) = 12x THtx as follows Vt(x, xt) = wt(x)− [ wt(xt) + 〈∇wt(xt), x− xt〉 ] = 1 2 (x− xt)THt(x− xt). (17) Thus, the step 9 of Algorithm 1 is equivalent to the following mirror descent iteration: x̃t+1 = arg min x∈X { 〈gt, x〉+ 1 γ Vt(x, xt) } . (18) As in [15], we define a gradient mapping GX (xt,∇f(xt), γ) = 1γ (xt − x + t+1), where x+t+1 = arg min x∈X { 〈∇f(xt), x〉+ 1 γ Vt(x, xt) } . (19) Let GX (xt, gt, γ) = 1γ (xt − x̃t+1). According to Proposition 1 in [15], we have ‖GX (xt, gt, γ) − GX (xt,∇f(xt), γ)‖ ≤ 1ρ‖∇f(xt) − gt‖. Since ‖GX (xt,∇f(xt), γ)‖ ≤ ‖GX (xt, gt, γ)‖ + ‖GX (xt, gt, γ) − GX (xt,∇f(xt), γ)‖, we have ‖GX (xt,∇f(xt), γ)‖ ≤ ‖GX (xt, gt, γ)‖ + 1 ρ‖∇f(xt) − gt‖ = 1 γ ‖xt − x̃t+1‖ + 1 ρ‖∇f(xt) − gt‖ = Mt. When Mt → 0, we can obtain ‖GX (xt,∇f(xt), γ)‖ → 0, where xt is a stationary point or local minimum of the problem (1) [15]. Clearly, our measure E[Mt] is tighter than the gradient mapping measure E‖GX (xt,∇f(xt), γ)‖. 4.3 Convergence Analysis of SUPER-ADAM (τ = 1) In this subsection, we provide the convergence analysis of our SUPER-ADAM (τ = 1) algorithm using the momentum-based variance reduced gradient estimator [11, 32]. Assumption 4. Each component function f(x; ξ) is L-smooth for all ξ ∈ D, i.e., ‖∇f(x; ξ)−∇f(y; ξ)‖ ≤ L‖x− y‖, ∀x, y ∈ X . Assumption 4 is widely used in the variance-reduced algorithms [13, 11]. According to Assumption 4, we have ‖∇f(x)−∇f(y)‖ = ‖E[∇f(x; ξ)−∇f(y; ξ)]‖ ≤ E‖∇f(x; ξ)−∇f(y; ξ)‖ ≤ L‖x− y‖ for all x, y ∈ X . Thus the function f(x) also is L-smooth. Theorem 1. 
In Algorithm 1, under the Assumptions (1,2,3,4), when X ⊂ Rd, and given τ = 1, µt = k (m+t)1/3 and αt+1 = cµ2t for all t ≥ 0, 0 < γ ≤ ρm1/3 4kL , 1 k3 + 10L2γ2 ρ2 ≤ c ≤ m2/3 k2 , m ≥ max ( 3 2 , k 3, 8 3/2 (3k)3/2 ) and k > 0, we have 1 T T∑ t=1 E‖GX (xt,∇f(xt), γ)‖ ≤ 1 T T∑ t=1 E [ Mt ] ≤ 2 √ 2Gm1/6 T 1/2 + 2 √ 2G T 1/3 , (20) where G = f(x1)−f ∗ kργ + m1/3σ2 8k2L2γ2 + k2c2σ2 4L2γ2 ln(m+ T ). Remark 1. Without loss of generality, let ρ = O(1), k = O(1), m = O(1), and γ = O(1), we have c = O(1) and G = O ( c2σ2 ln(m+ T ) ) = Õ(1). Thus, our algorithm has a convergence rate of Õ ( 1 T 1/3 ) . Let 1 T 1/3 ≤ , we have T ≥ −3. Since our algorithm only requires to compute two stochastic gradients at each iteration (e.g., only need to compute stochastic gradients∇f(xt+1; ξt+1) and ∇f(xt; ξt+1) to estimate gt+1), and needs T iterations. Thus, our SUPER-ADAM (τ = 1) has a gradient complexity of 2 · T = Õ( −3) for finding an -stationary point. Corollary 1. In Algorithm 1, under the above Assumptions (1,2,3,4), when X = Rd, and given τ = 1, µt = k(m+t)1/3 and αt+1 = cµ 2 t for all t ≥ 0, γ = ρm1/3 νkL (ν ≥ 4), 1 k3 + 10L2γ2 ρ2 ≤ c ≤ m2/3 k2 , m ≥ max ( 3 2 , k 3, 8 3/2 (3k)3/2 ) and k > 0, we have 1 T T∑ t=1 E‖∇f(xt)‖ ≤ max1≤t≤T ‖Ht‖ ρ ( 2 √ 2G′ T 1/2 + 2 √ 2G′ m1/6T 1/3 ) , (21) where G′ = νL(f(x1)− f∗) + ν 2σ2 8 + ν2k4c2σ2 4m1/3 ln(m+ T ). Remark 2. Under the same conditions in Theorem 1, based on the metric E‖∇f(x)‖, our SUPER-ADAM (τ = 1) still has a gradient complexity of Õ( −3). Interestingly, the right of the above inequality (21) includes a term max1≤t≤T ‖Ht‖ρ that can be seen as an upper bound of the condition number of adaptive matrices {Ht}Tt=1. When using Ht given in the above case 1, we have max1≤t≤T ‖Ht‖ρ ≤ G1+λ λ as in the existing adaptive gradient methods assuming the bounded stochastic gradient ‖∇f(x; ξ)‖∞ ≤ G1; When using Ht given in the above case 2, we have max1≤t≤T ‖Ht‖ρ ≤ G2+σ+λ λ as in the existing adaptive gradient methods assuming the bounded full gradient ‖∇f(x)‖ ≤ G2; When using Ht given in the above case 3, we have max1≤t≤T ‖Ht‖ ρ ≤ L+λ λ . When using Ht given in the above case 4, we have max1≤t≤T ‖Ht‖ ρ ≤ 2G1+λ λ or max1≤t≤T ‖Ht‖ρ ≤ 2(G2+σ)+λ λ . Note that we only study the gradient (sample) complexity of our algorithm in the worst case without considering some specific structures such as the sparsity of stochastic gradient. Since the adaptive matrix Ht can be given Ht = At + λId, we have max1≤t≤T ‖Ht‖ ρ = max1≤t≤T δmax(At)+λ min1≤t≤T δmin(At)+λ . Here we only can choose a proper tuning parameter λ to balance adaptive information with noises in At. To reduce max1≤t≤T ‖Ht‖ ρ , we can not increase λ, but should design the matrix At with a small condition number by some techniques, e.g., clipping [27]. 4.4 Convergence Analysis of SUPER-ADAM (τ = 0) In this subsection, we provide the convergence analysis of our SUPER-ADAM (τ = 0) algorithm using the basic momentum stochastic gradient estimator [22]. Assumption 5. The function f(x) = Eξ[f(x; ξ)] is L-smooth, i.e., ‖∇f(x)−∇f(y)‖ ≤ L‖x− y‖, ∀x, y ∈ X . Assumption 5 is widely used in adaptive algorithms [36, 8, 40], which is milder than Assumption 4. Theorem 2. In Algorithm 1, under the Assumptions (1,2,3,5), when X ⊂ Rd, and given τ = 0, µt = k (m+t)1/2 , αt+1 = cµt for all t ≥ 0, k > 0, 0 < γ ≤ ρm 1/2 8Lk , 8Lγ ρ ≤ c ≤ m1/2 k , and m ≥ k 2, we have 1 T T∑ t=1 E‖GX (xt,∇f(xt), γ)‖ ≤ 1 T T∑ t=1 E [ Mt ] ≤ 2 √ 2Mm1/4 T 1/2 + 2 √ 2M T 1/4 , where M = f(x1)−f ∗ ργk + 2σ2 ργkL + 2mσ2 ργkL ln(m+ T ). Remark 3. 
Remark 3. Without loss of generality, let $\rho = O(1)$, $k = O(1)$, $m = O(1)$ and $\gamma = O(1)$; then $M = O\big(\sigma^2\ln(m+T)\big) = \tilde{O}(1)$. Thus, our algorithm has a convergence rate of $\tilde{O}\big(\frac{1}{T^{1/4}}\big)$. Considering $\frac{1}{T^{1/4}} \leq \epsilon$, we have $T \geq \epsilon^{-4}$. Our algorithm requires computing one stochastic gradient at each iteration and needs $T$ iterations, so our SUPER-ADAM ($\tau = 0$) has a gradient complexity of $1\cdot T = \tilde{O}(\epsilon^{-4})$ for finding an $\epsilon$-stationary point.

Corollary 2. In Algorithm 1, under the above Assumptions (1,2,3,5), when $\mathcal{X} = \mathbb{R}^d$, and given $\tau = 0$, $\mu_t = \frac{k}{(m+t)^{1/2}}$, $\alpha_{t+1} = c\mu_t$ for all $t \geq 0$, $k > 0$, $\gamma = \frac{\rho m^{1/2}}{\nu Lk}$ ($\nu \geq 8$), $\frac{8L\gamma}{\rho} \leq c \leq \frac{m^{1/2}}{k}$, and $m \geq k^2$, we have
$$\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|\nabla f(x_t)\| \leq \frac{\max_{1\leq t\leq T}\|H_t\|}{\rho}\Big(\frac{2\sqrt{2M'}}{T^{1/2}} + \frac{2\sqrt{2M'}}{m^{1/4}T^{1/4}}\Big),$$
where $M' = \nu L(f(x_1) - f^*) + 2\nu\sigma^2 + 2\nu m\sigma^2\ln(m+T)$.

Remark 4. Under the same conditions as in Theorem 2, based on the metric $\mathbb{E}\|\nabla f(x_t)\|$, our SUPER-ADAM ($\tau = 0$) still has a gradient complexity of $\tilde{O}(\epsilon^{-4})$ for finding an $\epsilon$-stationary point.

5 Differences between Our Algorithm and Related Algorithms

In this section, we highlight some significant differences between our algorithm and related algorithms, i.e., the STORM algorithm [11] and Adam-type algorithms [22, 29, 40]. Although our SUPER-ADAM ($\tau = 1$) algorithm uses the same stochastic gradient estimator as STORM, there are some significant differences: 1) Our algorithm handles both constrained and unconstrained optimization, while STORM only handles unconstrained optimization. 2) In our algorithm, we introduce a weighted solution $x_{t+1}$ at step 10 via a momentum update. In this way, our algorithm can easily incorporate various adaptive learning rates and variance reduced techniques. Specifically, we can flexibly use various adaptive learning rates and different stochastic gradient estimators $g_t$ at step 9 of our algorithm; in fact, this is one of the important novelties of our paper. In contrast, STORM only uses a simple gradient descent iteration with a specific monotonically decreasing adaptive learning rate. Similarly, although our SUPER-ADAM ($\tau = 0$) algorithm uses the same stochastic gradient estimator as the Adam-type algorithms, there are some significant differences besides the different adaptive learning rates. The Adam-type algorithms use a decreasing learning rate $\eta_t = \frac{\eta}{\sqrt{t}}$ (please see (3), (4) and (6) above), while our algorithm only uses a constant learning rate $\gamma$ besides an adaptive learning rate. Moreover, our algorithm introduces a weighted solution $x_{t+1}$ at step 10 with a decreasing parameter $\mu_t = \frac{k}{\sqrt{m+t}}$ (please see Theorem 2) and uses a decreasing parameter $\alpha_{t+1} = c\mu_t$ in the gradient estimator, while the Adam-type algorithms only use a constant parameter $\alpha_1 \in (0,1)$ in their gradient estimators. In this way, our algorithm uses the decreasing parameters $\mu_t$ and $\alpha_{t+1}$ to control the noise in the gradient estimator, so our convergence analysis for constrained optimization does not require additional assumptions such as a bounded (stochastic) gradient assumption. For example, when $\tau = 0$, our gradient estimator is $g_{t+1} = \alpha_{t+1}\nabla f(x_{t+1};\xi_{t+1}) + (1-\alpha_{t+1})g_t$. Intuitively, as $t$ grows, $\alpha_{t+1} = \frac{ck}{\sqrt{m+t}}$ becomes small, so less new noise is injected into the gradient estimator $g_{t+1}$.

6 Numerical Experiments

In this section, we conduct experiments to empirically evaluate our SUPER-ADAM algorithm on two deep learning tasks, as in [25]: image classification on the CIFAR-10, CIFAR-100 and ImageNet datasets, and language modeling on the Wiki-Text2 dataset.
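Before detailing the experimental setup, the following is a minimal NumPy sketch of one full SUPER-ADAM iteration for the unconstrained case $\mathcal{X} = \mathbb{R}^d$ with the case 1 adaptive matrix (the variant used in our experiments); it is a simplified illustration rather than the released implementation, and the default hyper-parameters mirror the CIFAR settings reported below.

```python
import numpy as np

def super_adam_step(x, g, v, t, stoch_grad, draw_sample, k=1.0, m=100.0,
                    c=40.0, gamma=0.001, beta=0.9, lam=5e-4, tau=1):
    # One iteration of Algorithm 1 (case 1 adaptive matrix, X = R^d).
    # stoch_grad(x, xi) returns the stochastic gradient grad f(x; xi);
    # draw_sample() draws a fresh sample or minibatch.
    xi_t = draw_sample()                        # for simplicity a fresh sample is drawn;
    grad_t = stoch_grad(x, xi_t)                # in Algorithm 1 it can be reused from step 11
    v = beta * v + (1.0 - beta) * grad_t ** 2   # steps 5-6: v_t and
    H_diag = np.sqrt(v) + lam                   # H_t = diag(sqrt(v_t) + lambda)
    x_tilde = x - gamma * g / H_diag            # step 9 (closed form on R^d)
    mu = k / (m + t) ** ((1.0 / 3.0) if tau == 1 else 0.5)
    x_next = (1.0 - mu) * x + mu * x_tilde      # step 10: momentum averaging
    alpha = min((c * mu ** 2) if tau == 1 else (c * mu), 0.9)  # warmup clipping (Sec. 6.1)
    xi_next = draw_sample()                     # step 11: fresh sample xi_{t+1}
    grad_new = stoch_grad(x_next, xi_next)      # grad f(x_{t+1}; xi_{t+1})
    grad_old = stoch_grad(x, xi_next)           # grad f(x_t; xi_{t+1})
    g_next = alpha * grad_new + (1.0 - alpha) * (g + tau * (grad_new - grad_old))
    return x_next, g_next, v
```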
In the experiments, we compare our SUPER-ADAM algorithm against several state-of-the-art adaptive gradient algorithms, including: (1) SGD, (2) Adam [22], (3) Amsgrad [29], (4) AdaGrad-Norm [34], (5) Adam+ [25], (6) STORM [11] and (7) AdaBelief [40]. For our SUPER-ADAM algorithm, we consider $\tau = 1$ and $\tau = 0$, respectively. Without loss of generality, in the following experiments, we only use case 1 in Algorithm 1 to generate the adaptive matrix $H_t$ and set $\lambda = 0.0005$. All experiments are run on a machine with an Intel Xeon E5-2683 CPU and 4 Nvidia Tesla P40 GPUs.

6.1 Image Classification Task

In this experiment, we conduct the image classification task on the CIFAR-10, CIFAR-100 and ImageNet datasets. We train ResNet-18 [20] and VGG-19 [30] on the CIFAR-10 and CIFAR-100 datasets, respectively. For all the optimizers, we set the batch size to 128 and train for 200 epochs. For the learning rates and other hyper-parameters, we perform a grid search and report the best setting for each optimizer. In the Adam, Amsgrad and AdaBelief algorithms, we set the learning rate to 0.001. In AdaGrad-Norm, the best learning rate is 17 for CIFAR-10 and 10 for CIFAR-100, respectively. In Adam+, we use the tuning parameters recommended in [25]. In STORM, the best result is obtained with $w = 6$, $k = 10$ and $c = 100$ for CIFAR-10, and $w = 3$, $k = 10$ and $c = 100$ for CIFAR-100. For our SUPER-ADAM algorithm, on both the CIFAR-10 and CIFAR-100 datasets, we set $k = 1$, $m = 100$, $c = 40$, $\gamma = 0.001$ when $\tau = 1$, and $k = 1$, $m = 100$, $c = 20$, $\gamma = 0.001$ when $\tau = 0$. Note that although $c > \frac{m^{2/3}}{k^2}$ ($c > \frac{m^{1/2}}{k}$) in these settings, we set $\alpha_t = \min(\alpha_t, 0.9)$ during the first several iterations. In our algorithm, $\mu_t = \frac{k}{(m+t)^{1/3}}$ ($\mu_t = \frac{k}{(m+t)^{1/2}}$) decreases as the iteration number $t$ increases, so $\alpha_{t+1} = c\mu_t^2$ ($\alpha_{t+1} = c\mu_t$) falls below 1 after the first several iterations. We train a ResNet-34 [20] on the ImageNet dataset. For all the optimizers, we set the batch size to 256 and train for 60 epochs. In Adam, Amsgrad and AdaBelief, we set the learning rate to 0.001. In AdaGrad-Norm, the best learning rate is 30. In Adam+, we set the learning rate to 0.1. In STORM, the best result is obtained with $k = 5$, $w = 100$ and $c = 10$. For our algorithm, we set $k = 1$, $m = 100$, $c = 40$, $\gamma = 0.01$ when $\tau = 1$, and $k = 1$, $m = 100$, $c = 4$, $\gamma = 0.04$ when $\tau = 0$.

Figures 1 and 2 show the train and test error and accuracy results on the CIFAR-10 and CIFAR-100 datasets, respectively. Our SUPER-ADAM algorithm consistently outperforms the other optimizers by a large margin, especially when we set $\tau = 1$. When we set $\tau = 0$, our SUPER-ADAM algorithm achieves performance comparable to Adam/AmsGrad. Figure 3 shows the ImageNet results of the different optimizers with ResNet-34; our algorithm outperforms the other optimizers, especially when $\tau = 1$. Figure 4 shows that both the condition number of $H_t$ and the $\ell_2$ norm of the full gradient (i.e., $\|\nabla f(x_t)\|$) decrease as the number of iterations increases. Since the condition number of $H_t$ decreases over the iterations, it must have an upper bound. Thus, these experimental results further demonstrate that the convergence results in Corollaries 1 and 2 are reasonable.

6.2 Language Modeling Task

In this experiment, we conduct the language modeling task on the Wiki-Text2 dataset. Specifically, we train a 2-layer LSTM [21] and a 2-layer Transformer on the Wiki-Text2 dataset. For the LSTM, we use 650-dimensional word embeddings and 650 hidden units per layer; a minimal sketch of this model is given below.
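As a concrete illustration of this configuration, here is a minimal PyTorch-style sketch of the 2-layer LSTM language model (the class and argument names are our own illustration, not the exact training code):

```python
import torch.nn as nn

class LSTMLanguageModel(nn.Module):
    # 2-layer LSTM language model with the sizes stated above:
    # 650-dimensional word embeddings and 650 hidden units per layer.
    def __init__(self, vocab_size, emsize=650, nhid=650, nlayers=2, dropout=0.5):
        super().__init__()
        self.drop = nn.Dropout(dropout)
        self.encoder = nn.Embedding(vocab_size, emsize)
        self.lstm = nn.LSTM(emsize, nhid, nlayers, dropout=dropout)
        self.decoder = nn.Linear(nhid, vocab_size)

    def forward(self, tokens, hidden=None):
        emb = self.drop(self.encoder(tokens))
        out, hidden = self.lstm(emb, hidden)
        return self.decoder(self.drop(out)), hidden
```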
Due to space limitations, we provide the experimental results for the Transformer in the supplementary materials. In this experiment, we set the batch size to 20 and train for 40 epochs with a dropout rate of 0.5. We also clip the gradients to norm 0.25 to guard against exploding gradients in the LSTM, and we decrease the learning rate by a factor of 4 whenever the validation error increases. For the learning rate, we again perform a grid search and report the best setting for each optimizer. In the Adam and Amsgrad algorithms, we set the learning rate to 0.001 for the LSTM. In the AdaGrad-Norm algorithm, the best learning rate is 40. In the Adam+ algorithm, we use a learning rate of 20. In the AdaBelief algorithm, we set the learning rate to 0.1. In the STORM algorithm, we set $w = 50$, $k = 10$ and $c = 100$. In our SUPER-ADAM algorithm, we set $k = 1$, $m = 100$, $c = 40$, $\gamma = 0.001$ when $\tau = 1$, and $k = 1$, $m = 100$, $c = 20$, $\gamma = 0.01$ when $\tau = 0$. Figure 5 shows both the train and test perplexities (losses) for the different optimizers. When $\tau = 1$, our SUPER-ADAM algorithm outperforms all the other optimizers. When $\tau = 0$, our SUPER-ADAM optimizer achieves performance comparable to the other Adam-type optimizers.

7 Conclusions

In this paper, we proposed a novel faster and universal adaptive gradient framework (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. In particular, our algorithm can flexibly work with momentum and variance reduced techniques. Moreover, we provided a novel convergence analysis framework for adaptive gradient methods under the nonconvex setting. Experimental studies were conducted on both image classification and language modeling tasks, and all empirical results verify the superior performance of our algorithm.

Acknowledgments and Disclosure of Funding

This work was partially supported by NSF IIS 1845666, 1852606, 1838627, 1837956, 1956002, OIA 2040588.
1. What is the main contribution of the paper regarding nonconvex optimization?
2. What are the strengths and weaknesses of the proposed unified framework, particularly in comparison to existing methods such as STORM and SGD?
3. Do you have any concerns or questions about the convergence measure used in the paper, especially in terms of its tightness and applicability in various scenarios?
4. How does the choice of hyperparameters, such as τ = 0 or τ = 1, affect the performance of the algorithm?
5. Can the authors provide more information on the technical innovations of their proof compared to previous works like STORM?
6. Are there any limitations or trade-offs in the experimental results, such as running time, that should be considered?
7. How does the reviewer assess the overall impact and novelty of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This manuscript introduces a unified framework (i.e., SUPER-ADAM) of adaptive gradient methods for nonconvex optimization. It is proved that a universal convergence measure converges at a fast rate, which recovers the $O(\epsilon^{-3})$ complexity of STORM. Another result is that it also recovers the $O(\epsilon^{-4})$ result of SGD when the individual functions do not have Lipschitz-continuous gradients. Numerical experiments on image classification and language modeling show that SUPER-ADAM has a faster convergence rate when training deep neural networks.

Review

The unified framework is new to me, but the only algorithmic innovation is line 12 in Algorithm 1. The choices of τ = 0 or τ = 1 are nothing but hyper-parameters which make the algorithm use the standard gradient estimator or the STORM estimator. I have the following concerns.

Convergence Measure. The authors claim that the proposed measure in (15) is tighter than existing measures. I agree that this is the case for unconstrained nonconvex optimization with a non-coordinate-wise adaptive learning rate. What about the constrained case and the adaptive case in which the preconditioning matrix is not the identity? I do not think it is still tighter. For example, the gradient mapping may not be a lower bound for the measure in (15). In the unconstrained adaptive learning rate case, $\rho H_t^{-1}$ should be very small when $t$ gets large, so it is also not an upper bound of the gradient norm. For fair comparison, is it possible to prove convergence in the usual measure (i.e., the gradient mapping in the constrained case and the gradient norm in the unconstrained adaptive case)?

As I mentioned, line 12 is the only difference compared with existing methods. Why is line 12 important for recovering the convergence rate of STORM? There is no doubt that in the unconstrained case, without line 12 and with τ = 1, the algorithm is the same as STORM and hence can recover the $O(\epsilon^{-3})$ rate. I would also like to ask the same question in terms of SGD. After inspecting Lemma 2, Lemma 3 and Theorem 3 in this manuscript, I think the proofs are mostly adapted from the STORM paper, except for plugging in Lemma 1 of [10] and line 11 of Algorithm 1. Please clarify the technical innovations compared with Lemma 1, Lemma 2 and Theorem 1 in the STORM paper.

Experiments: For SUPER-ADAM with τ = 1, since the algorithm needs to calculate two stochastic gradients per iteration, the running time might be slow. Is it possible to report running time results?

From my point of view, the theoretical contribution is minimal for unconstrained problems, since SGD and STORM are already optimal under different assumptions, and SUPER-ADAM's algorithm design is very similar to them except for line 12. The convergence measure when using a coordinate-wise adaptive learning rate for unconstrained problems is not comparable with the usual gradient norm. SUPER-ADAM might be interesting in the constrained case. However, the convergence measure is not comparable with the gradient mapping, and there are no empirical studies for constrained problems in this submission. Due to these reasons, it is hard to say that the algorithm is universal.
NIPS
1. What is the focus and contribution of the paper on nonconvex stochastic optimization?
2. What are the strengths of the proposed super-adam algorithm, particularly in its adaptive matrix and integration of momentum and variance reduced techniques?
3. What are the weaknesses of the paper regarding typos and minor errors?
4. Do you have any suggestions for improving the paper, such as providing more details on the adaptive learning rates used in experiments?
5. How do you assess the overall quality and impact of the paper, and what is your final rating after considering the rebuttal?
Summary Of The Paper Review
Summary Of The Paper

This paper proposes a faster and universal framework of adaptive gradients (super-adam) for nonconvex stochastic optimization. Specifically, super-adam introduces a universal adaptive matrix that includes most existing adaptive gradient forms and many new adaptive learning rates. In particular, it can flexibly integrate the momentum and variance reduced techniques. Moreover, this paper provides an effective and interesting theoretical analysis framework based on a new convergence metric. Meanwhile, it provides extensive experimental results to demonstrate the efficiency of the super-adam algorithm. Overall, it is an interesting and innovative paper. It will have a wide range of applications in machine learning.

Review

Some comments:
1. I recommend the authors to proofread the paper again. Some typos:
In the line 9: “our framework can flexibly integrates …” should be “our framework can flexibly integrate…”
In the line 115: “proposed to adapt the stepsize” should be “proposed to adopt the stepsize”
At the step 13 of super-adam algorithm, there is a symbol error: “[g_t - ” should be “[ g_t + ”. In the line 436, the gradient estimator g_{t+1} is right. So I think that this symbol error at the step 13 of algorithm is a typo.
In the line 145, “g_t - \nabla f(x_{t+1};\xi_{t+1})” should be “g_t - \nabla f(x_t;\xi_{t+1})”.
2. I also recommend the authors to detail the adaptive learning rates used in the super-adam algorithm in the experiment.

----After Rebuttal----
As the author's rebuttal well addressed my main concerns, I increased my score to 8 to support this paper. Thanks.
NIPS
Title SUPER-ADAM: Faster and Universal Framework of Adaptive Gradients Abstract Adaptive gradient methods have shown excellent performances for solving many machine learning problems. Although multiple adaptive gradient methods were recently studied, they mainly focus on either empirical or theoretical aspects and also only work for specific problems by using some specific adaptive learning rates. Thus, it is desired to design a universal framework for practical algorithms of adaptive gradients with theoretical guarantee to solve general problems. To fill this gap, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Moreover, our framework can flexibly integrate the momentum and variance reduced techniques. In particular, our novel framework provides the convergence analysis support for adaptive gradient methods under the nonconvex setting. In theoretical analysis, we prove that our SUPER-ADAM algorithm can achieve the best known gradient (i.e., stochastic first-order oracle (SFO)) complexity of Õ( −3) for finding an -stationary point of nonconvex optimization, which matches the lower bound for stochastic smooth nonconvex optimization. In numerical experiments, we employ various deep learning tasks to validate that our algorithm consistently outperforms the existing adaptive algorithms. Code is available at https://github.com/LIJUNYI95/SuperAdam 1 Introduction In the paper, we consider solving the following stochastic optimization problem: min x∈X f(x) := Eξ∼D[f(x; ξ)], (1) where f(x) denotes a smooth and possibly nonconvex loss function, and ξ is a random example variable following an unknown data distribution D. Here X = Rd or X ⊂ Rd is a compact and convex set. The problem (1) frequently appears in many machine learning applications such as the expectation loss minimization. Recently, Stochastic Gradient Descent (SGD) [14] is commonly used to solve the problem (1) such as Deep Neural Networks (DNNs) training [18, 20], due to only requiring a mini-batch samples or even one sample at each iteration. Adaptive gradient methods are one of the most important variants of SGD, which use adaptive learning rates and possibly incorporate momentum techniques, so they generally require less parameter tuning and enjoy faster convergence rate than SGD. Meanwhile, compared to SGD, adaptive gradient methods escape saddle points faster [31]. Thus, recently adaptive gradient methods have been widely developed and studied. For example, the first adaptive gradient method i.e., Adagrad has been proposed in [12], which significantly outperforms the vanilla SGD under the sparse gradient setting. Subsequently, some variants of Adagrad e.g., SC-Adagra [28] and SAdagrad [9] have been proposed for (strongly) convex optimization. Unfortunately, Adagrad has been found that it does not be well competent to the dense gradient setting and the nonconvex setting. To address this drawback, some other efficient variants of 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Adagrad, e.g., Adadelta [37], Adam [22], have been presented by using exponential moving average instead of the arithmetic average. Adam [22] recently has been shown great successes in current machine learning problems, e.g., it is a default method of choice for training DNNs [17] and contrastive learning [7]. Unfortunately, Reddi et al. 
[29] still showed that Adam is frequently divergent in some settings where the gradient information quickly disappear. To deal with this issue, some variants of Adam algorithm, e.g., AMSGrad [29], YOGI [36] and generalized Adam [8] have been proposed. Specifically, AMSGrad [29] applies an extra ‘long term memory’ variable to preserve the past gradient information in order to handle the convergence issue of Adam. YOGI [36] introduces an adaptive denominator constant, and studies effect of the mini-batch size in its convergence. Subsequently, Chen et al. [8] studied the convergence of a class of Adam-type algorithms for nonconvex optimization. Zhou et al. [39] analyzed the convergence of a class of adaptive gradient algorithms for nonconvex optimization, and the result shows the advantage of adaptive gradient methods over SGD in sparse stochastic gradient setting. Meanwhile, Liu et al. [24] studied the variances of these adaptive algorithms. More recently, Guo et al. [19] presented a novel convergence analysis for a family of Adam-style methods (including Adam, AMSGrad, Adabound, etc.) with an increasing or large momentum parameter for the first-order moment. Although the above these adaptive gradient methods show some good empirical performances, their generalization performance is worse than SGD (with momentum) on many deep learning tasks due to using the coordinate-wise learning rates [35]. Thus, recently some adaptive gradient methods have been proposed to improve the generalization performance of Adam. For example, AdamW [26] and Padam [6] improve the generalization performance of Adam by decoupling weight decay regularization and introducing a partial adaptive parameter, respectively. Luo et al. [27] proposed a new variant of Adam (i.e., Adabound) by employing dynamic bounds on learning rates to improve the generalization performance. Subsequently, AdaBelief [40] has been presented to obtain a good generalization by adopting the stepsize according to the ‘belief’ in the current gradient direction. In addition, the norm version of AdaGrad (i.e., AdaGrad-Norm) [34] has been proposed to obtain a good generalization performance. So far, the above adaptive gradient methods still suffer from a high gradient complexity of O( −4) for finding -stationary point in the worst case without considering sparsity of gradient. More recently, some faster variance-reduced adaptive gradient methods such as STORM [11], Adaptive Normalized SGD [10], Adam+ [25] have been proposed. For example, STORM applies the momentum-based variance reduced technique to obtain a lower gradient complexity of Õ( −3). To the best of our knowledge, all these existing adaptive gradient methods only use some specific adaptive learning rates with focusing on either pure theoretical or empirical aspects. Thus, it is desired to design a universal framework for the adaptive gradient methods on both theoretical analysis and practical algorithms to solve the generic problems. To fill this gap, in the paper, we propose a faster and universal framework of adaptive gradients, i.e., SUPER-ADAM algorithm, by introducing a universal adaptive matrix. Moreover, we provide a novel convergence analysis framework for the adaptive gradient methods under the nonconvex setting based on the mirror descent algorithm [5, 15]. In summary, our main contributions are threefold: 1) We propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradients. 
Moreover, our framework can flexibly integrate the momentum and variance-reduced techniques. 2) We provide a novel convergence analysis framework for the adaptive gradient methods in the nonconvex setting under the milder conditions (Please see Table 1). 3) We apply a momentum-based variance reduced gradient estimator [11, 32] to our algorithm (SUPER-ADAM (τ = 1)), which makes our algorithm reach a faster convergence rate than the classic adaptive methods. Specifically, under smoothness of each component function f(x; ξ), we prove that the SUPER-ADAM (τ = 1) achieves the best known gradient complexity of Õ( −3) for finding an -stationary point of the problem (1), which matches the lower bound for stochastic smooth nonconvex optimization [1]. Under smoothness of the function f(x), we prove that the SUPER-ADAM (τ = 0) achieves a gradient complexity of Õ( −4). 2 Preliminaries 2.1 Notations ‖ · ‖ denotes the `2 norm for vectors and spectral norm for matrices, respectively. Id denotes a d-dimensional identity matrix. diag(a) ∈ Rd denotes a diagonal matrix with diagonal entries a = (a1, · · · , ad). For vectors u and v, up (p > 0) denotes element-wise power operation, u/v denotes element-wise division and max(u, v) denotes element-wise maximum. 〈u, v〉 denotes the inner product of two vectors u and v. For two sequences {an} and {bn}, we write an = O(bn) if there exists a positive constant C such that an ≤ Cbn, and Õ(·) hides logarithmic factors. A 0( 0) denotes a positive (semi)definite matrix. δmin(A) and δmax(A) denote the smallest and largest eigenvalues of the matrix A, respectively. 2.2 Adaptive Gradient Algorithms In the subsection, we review some existing typical adaptive gradient methods. Recently, many adaptive algorithms have been proposed to solve the problem (1), and achieve good performances. For example, Adagrad [12] is the first adaptive gradient method with adaptive learning rate for each individual dimension, which adopts the following update form: xt+1 = xt − ηtgt/ √ vt, (2) where gt = ∇f(xt; ξt) and vt = 1t ∑t j=1 g 2 j , and ηt = η√ t with η > 0 is the step size. In fact, ηt only is the basic learning rate that is the same for all coordinates of variable xt, while ηt√vt,i is the effective learning rate for the i-th coordinate of xt, which changes across the coordinates. Adam [22] is one of the most popular exponential moving average variant of Adagrad, which combines the exponential moving average technique with momentum acceleration. Its update form is: mt = α1mt−1 + (1− α1)∇f(xt; ξt), vt = α2vt−1 + (1− α2)(∇f(xt; ξt))2 m̂t = mt/(1− αt1), v̂t = vt/(1− αt2), xt+1 = xt − ηtm̂t/( √ v̂t + ε), ∀ t ≥ 1 (3) where α1, α2 ∈ (0, 1) and ε > 0, and ηt = η√t with η > 0. However, Reddi et al. [29] found a divergence issue of the Adam algorithm, and proposed a modified version of Adam (i.e., Amsgrad), which adopts a new step instead of the debiasing step in (3) to ensure the decay of the effective learning rate, defined as v̂t = max(v̂t−1, vt), xt+1 = xt − ηtmt/ √ v̂t. (4) Algorithm 1 SUPER-ADAM Algorithm 1: Input: Total iteration T , and tuning parameters {µt, αt}Tt=1, γ > 0 ; 2: Initialize: x1 ∈ X , sample one point ξ1 and compute g1 = ∇f(x1; ξ1); 3: for t = 1, 2, . . . 
4: Generate an adaptive matrix H_t ∈ R^{d×d}; // two examples of updating H_t:
5: Case 1: given β ∈ (0, 1), λ > 0 and v_0 = 0,
6: v_t = βv_{t−1} + (1 − β)∇f(x_t; ξ_t)^2, H_t = diag(√v_t + λ);
7: Case 2: given β ∈ (0, 1), λ > 0 and b_0 = 0,
8: b_t = βb_{t−1} + (1 − β)‖∇f(x_t; ξ_t)‖, H_t = (b_t + λ)I_d;
9: Update x̃_{t+1} = arg min_{x∈X} { ⟨g_t, x⟩ + (1/(2γ))(x − x_t)^T H_t (x − x_t) };
10: Update x_{t+1} = (1 − µ_t)x_t + µ_t x̃_{t+1};
11: Sample one point ξ_{t+1}, and compute g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})[g_t + τ(∇f(x_{t+1}; ξ_{t+1}) − ∇f(x_t; ξ_{t+1}))], where τ ∈ {0, 1};
12: end for
13: Output: (for theory) x_ζ chosen uniformly at random from {x_t}_{t=1}^T; (for practice) x_T.
Due to the use of coordinate-wise learning rates, these adaptive gradient methods frequently have worse generalization performance than SGD (with momentum) [35]. To improve the generalization performance of Adam, AdamW [26] uses a decoupled weight decay regularization, defined as
m_t = α_1 m_{t−1} + (1 − α_1)∇f(x_t; ξ_t), v_t = α_2 v_{t−1} + (1 − α_2)(∇f(x_t; ξ_t))^2,
m̂_t = m_t/(1 − α_1^t), v̂_t = v_t/(1 − α_2^t), x_{t+1} = x_t − η_t(αm̂_t/(√v̂_t + ε) + λx_t), (5)
where α_1, α_2 ∈ (0, 1), α > 0, λ > 0 and ε > 0. More recently, to further improve generalization performance, AdaBelief [40] adopts a stepsize according to the ‘belief’ in the current gradient direction,
m_t = α_1 m_{t−1} + (1 − α_1)∇f(x_t; ξ_t), v_t = α_2 v_{t−1} + (1 − α_2)(∇f(x_t; ξ_t) − m_t)^2 + ε,
m̂_t = m_t/(1 − α_1^t), v̂_t = v_t/(1 − α_2^t), x_{t+1} = x_t − η_t m̂_t/(√v̂_t + ε), ∀ t ≥ 1, (6)
where α_1, α_2 ∈ (0, 1), η_t = η/√t with η > 0, and ε > 0. At the same time, to improve generalization performance, some effective adaptive gradient methods [34, 23, 11] have recently been proposed that adopt global adaptive learning rates instead of their coordinate-wise counterparts. For example, AdaGrad-Norm [34] applies a global adaptive learning rate to the following update form, for all t ≥ 1,
x_t = x_{t−1} − η∇f(x_{t−1}; ξ_{t−1})/b_t, b_t^2 = b_{t−1}^2 + ‖∇f(x_{t−1}; ξ_{t−1})‖^2, b_0 > 0, (7)
where η > 0. The adaptive-SGD [23] adopts a global adaptive learning rate, defined as, for all t ≥ 1,
η_t = k/(ω + Σ_{i=1}^{t−1} ‖∇f(x_i; ξ_i)‖^2)^{1/2+ε}, x_{t+1} = x_t − η_t∇f(x_t; ξ_t), (8)
where k > 0, ω > 0, and ε ≥ 0. Subsequently, STORM [11] not only uses a global adaptive learning rate but also adopts the variance-reduced technique in the gradient estimator to accelerate the algorithm, defined as, for all t ≥ 1,
η_t = k/(ω + Σ_{i=1}^{t} ‖∇f(x_i; ξ_i)‖^2)^{1/3}, x_{t+1} = x_t − η_t g_t,
g_{t+1} = ∇f(x_{t+1}; ξ_{t+1}) + (1 − cη_t^2)(g_t − ∇f(x_t; ξ_{t+1})), (9)
where k > 0, ω > 0 and c > 0.
3 SUPER-ADAM Algorithm
In this section, we propose a faster and universal framework of adaptive gradients (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. Specifically, our SUPER-ADAM algorithm is summarized in Algorithm 1. At step 4 of Algorithm 1, we generate an adaptive matrix H_t based on stochastic gradient information, which can encode both coordinate-wise and global learning rates. For example, the H_t generated from Case 1 in Algorithm 1 is similar to the coordinate-wise adaptive learning rate used in Adam [22]. The H_t generated from Case 2 in Algorithm 1 is similar to the global adaptive learning rate used in AdaGrad-Norm [34] and Adaptive-SGD [23]. Moreover, we can obtain some new adaptive learning rates by generating specific adaptive matrices. In Case 3, based on the Barzilai-Borwein technique [2], we design a novel adaptive matrix H_t defined as:
b_t = |⟨∇f(x_t; ξ_t) − ∇f(x_{t−1}; ξ_t), x_t − x_{t−1}⟩| / ‖x_t − x_{t−1}‖^2, H_t = (b_t + λ)I_d, (10)
where λ > 0.
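To make the framework concrete, the following is a minimal NumPy sketch of Algorithm 1 for the unconstrained case X = R^d, where step 9 reduces to x̃_{t+1} = x_t − γH_t^{−1}g_t. The parameter schedules follow Theorems 1 and 2 below, and the early clipping of α_t follows Section 6; the function names and interface are our illustrative assumptions, not a reference implementation.

```python
import numpy as np

def super_adam(grad, x1, T, gamma=0.01, k=1.0, m=100.0, c=40.0,
               beta=0.999, lam=1e-3, tau=1, case=1, seed=0):
    """Sketch of Algorithm 1 on X = R^d. `grad(x, xi)` returns the
    stochastic gradient of f at x for the sample indexed by `xi`."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x1, dtype=float)
    xi = rng.integers(1 << 30)
    g = grad(x, xi)                          # g_1 = grad f(x_1; xi_1)
    v, b = np.zeros_like(x), 0.0             # states for Case 1 / Case 2
    for t in range(1, T + 1):
        sg = grad(x, xi)                     # grad f(x_t; xi_t), used to form H_t
        if case == 1:                        # Case 1: coordinate-wise (Adam-like)
            v = beta * v + (1 - beta) * sg ** 2
            h = np.sqrt(v) + lam             # diagonal of H_t
        else:                                # Case 2: global (AdaGrad-Norm-like)
            b = beta * b + (1 - beta) * np.linalg.norm(sg)
            h = (b + lam) * np.ones_like(x)
        # Steps 9-10: generalized gradient step, then momentum averaging.
        mu = k / (m + t) ** (1 / 3 if tau == 1 else 1 / 2)
        x_next = (1 - mu) * x + mu * (x - gamma * g / h)
        # Step 11: gradient estimator with a fresh sample xi_{t+1};
        # alpha is clipped early on, as in the experiments of Section 6.
        xi = rng.integers(1 << 30)
        alpha = min(0.9, c * mu ** 2 if tau == 1 else c * mu)
        g_new = grad(x_next, xi)
        if tau == 1:                         # STORM-style variance reduction
            g = alpha * g_new + (1 - alpha) * (g + g_new - grad(x, xi))
        else:                                # basic momentum estimator
            g = alpha * g_new + (1 - alpha) * g
        x = x_next
    return x
```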
In Case 4, following the adaptive learning rate used in [40], we can generate a coordinate-wise-type adaptive matrix H_t = diag(√v_t + λ) and a global-type adaptive matrix H_t = (b_t + λ)I_d, respectively, defined as:
m_t = β_1 m_{t−1} + (1 − β_1)∇f(x_t; ξ_t), v_t = β_2 v_{t−1} + (1 − β_2)(∇f(x_t; ξ_t) − m_t)^2,
b_t = β_2 b_{t−1} + (1 − β_2)‖∇f(x_t; ξ_t) − m_t‖, (11)
where β_1, β_2 ∈ (0, 1) and λ > 0. In fact, the adaptive matrix H_t can be given in the generic form H_t = A_t + λI_d, where the matrix A_t includes the adaptive information generated from stochastic gradients with noise, and the tuning parameter λ > 0 balances this adaptive information against the noise. At step 9 of Algorithm 1, we use a generalized gradient descent (i.e., mirror descent) iteration [5, 3, 15] to update x based on the adaptive matrix H_t, defined as
x̃_{t+1} = arg min_{x∈X} { ⟨g_t, x⟩ + (1/(2γ))(x − x_t)^T H_t (x − x_t) } (12)
= arg min_{x∈X} { f(x_t) + ⟨g_t, x − x_t⟩ + (1/(2γ))(x − x_t)^T H_t (x − x_t) }, (13)
where γ > 0 is a constant stepsize. In the above subproblem (13), we can omit the constant terms f(x_t) and ⟨g_t, x_t⟩. In the subproblem (13), the first two terms of the objective function form a linear approximation of the function f(x) based on the stochastic gradient g_t, and the last term can be seen as a Bregman distance between x and x_t based on the Bregman function w_t(x) = (1/2)x^T H_t x. At step 10 of Algorithm 1, we use a momentum update to obtain a weighted solution x_{t+1} = (1 − µ_t)x_t + µ_t x̃_{t+1}, where µ_t ∈ (0, 1] ensures x_{t+1} ∈ X. When X = R^d, step 9 is equivalent to x̃_{t+1} = x_t − γH_t^{−1} g_t. Then by step 10, we have
x_{t+1} = (1 − µ_t)x_t + µ_t x̃_{t+1} = x_t − γµ_t H_t^{−1} g_t. (14)
In this case, γµ_t is a basic stepsize like η_t in formula (3) of the Adam algorithm, and H_t^{−1} is an adaptive stepsize like 1/√v̂_t in formula (3) of the Adam algorithm. At step 11 of Algorithm 1, we use the stochastic gradient estimator g_{t+1}, for all t ≥ 1:
g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})[g_t + τ(∇f(x_{t+1}; ξ_{t+1}) − ∇f(x_t; ξ_{t+1}))], (15)
where τ ∈ {0, 1} and α_{t+1} ∈ (0, 1] for all t ≥ 1. When τ = 1, we have g_{t+1} = ∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})(g_t − ∇f(x_t; ξ_{t+1})) for all t ≥ 1, which is the momentum-based variance-reduced gradient estimator used in STORM [11]. When τ = 0, we have g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})g_t for all t ≥ 1, which is the basic momentum gradient estimator used in the Adam algorithm [22].
4 Theoretical Analysis
In this section, we study the convergence properties of our algorithm (SUPER-ADAM) under some mild conditions. All detailed proofs are in the supplementary materials.
4.1 Some Mild Assumptions
Assumption 1. The variance of the unbiased stochastic gradient is bounded, i.e., there exists a constant σ > 0 such that for all x ∈ X, E[∇f(x; ξ)] = ∇f(x) and E‖∇f(x; ξ) − ∇f(x)‖^2 ≤ σ^2.
Assumption 2. The function f(x) is bounded from below on X, i.e., f* = inf_{x∈X} f(x) > −∞.
Assumption 3. The adaptive matrix H_t satisfies H_t ⪰ ρI_d ≻ 0 for all t ≥ 1, where ρ > 0 denotes a lower bound on the smallest eigenvalue of H_t for all t ≥ 1.
Assumption 1 is commonly used in stochastic optimization [15, 11]. Assumption 2 ensures the feasibility of the problem (1). In fact, all adaptive algorithms in Table 1 require these mild Assumptions 1 and 2. Assumption 3 guarantees that the adaptive matrices {H_t}_{t≥1} are positive definite and that their smallest eigenvalues have a lower bound ρ > 0. For the adaptive matrices {H_t}_{t≥1} given in our SUPER-ADAM algorithm above, we have ρ ≥ λ > 0. In fact, many existing adaptive algorithms also implicitly use Assumption 3. For example, Zaheer et al. [36] and Zhuang et al.
[40] used the following iteration to update the variable x: x_{t+1} = x_t − η_t m_t/(√v_t + ε) for all t ≥ 0 and ε > 0, which is equivalent to x_{t+1} = x_t − η_t H_t^{−1} m_t with H_t = diag(√v_t + ε). Clearly, we have H_t ⪰ εI_d ≻ 0. Ward et al. [34] applied a global adaptive learning rate to the update form in (7), which is equivalent to the form x_t = x_{t−1} − ηH_t^{−1}∇f(x_{t−1}; ξ_{t−1}) with H_t = b_t I_d. By (7) above, we have H_t ⪰ · · · ⪰ H_0 = b_0 I_d ≻ 0. Li et al. [23] and Cutkosky et al. [11] applied global adaptive learning rates in the update forms (8) and (9), which are equivalent to x_{t+1} = x_t − H_t^{−1} g_t, where H_t = (1/η_t)I_d and η_t = k/(ω + Σ_{i=1}^{t} ‖∇f(x_i; ξ_i)‖^2)^α with k > 0, ω > 0, α ∈ (0, 1). By (8) and (9) above, we have H_t ⪰ · · · ⪰ H_0 = (ω^α/k)I_d ≻ 0. Reddi et al. [29] and Chen et al. [6] used the condition v̂_t = max(v̂_{t−1}, v_t); letting H_t = diag(√v̂_t), we have H_t ⪰ · · · ⪰ H_1 = diag(√v̂_1) = √(1 − α_2)·diag(|∇f(x_1; ξ_1)|) ≻ 0. Without loss of generality, choosing an initial point x_1 such that (∇f(x_1; ξ_1))_j ≠ 0 for all j ∈ [d], we have H_t ⪰ · · · ⪰ H_1 ≻ 0. Interestingly, our SUPER-ADAM algorithm includes a class of novel momentum-based quasi-Newton algorithms obtained by generating an approximated Hessian matrix H_t. In fact, quasi-Newton algorithms [33, 16, 38] generally require bounded approximated Hessian matrices, i.e., κ̂I_d ⪰ H_t ⪰ κ̄I_d ≻ 0 for all t ≥ 1, where κ̂ ≥ κ̄ > 0. Thus Assumption 3 is reasonable and mild. Due to Assumption 3, our convergence analysis can easily be applied to stochastic quasi-Newton algorithms.
4.2 A Useful Convergence Measure
We provide a useful measure to analyze the convergence of our algorithm, defined as
M_t = (1/ρ)‖∇f(x_t) − g_t‖ + (1/γ)‖x̃_{t+1} − x_t‖. (16)
We define a Bregman distance [4, 5, 15] associated with the function w_t(x) = (1/2)x^T H_t x as follows:
V_t(x, x_t) = w_t(x) − [w_t(x_t) + ⟨∇w_t(x_t), x − x_t⟩] = (1/2)(x − x_t)^T H_t (x − x_t). (17)
Thus, step 9 of Algorithm 1 is equivalent to the following mirror descent iteration:
x̃_{t+1} = arg min_{x∈X} { ⟨g_t, x⟩ + (1/γ)V_t(x, x_t) }. (18)
As in [15], we define a gradient mapping G_X(x_t, ∇f(x_t), γ) = (1/γ)(x_t − x^+_{t+1}), where
x^+_{t+1} = arg min_{x∈X} { ⟨∇f(x_t), x⟩ + (1/γ)V_t(x, x_t) }. (19)
Let G_X(x_t, g_t, γ) = (1/γ)(x_t − x̃_{t+1}). According to Proposition 1 in [15], we have ‖G_X(x_t, g_t, γ) − G_X(x_t, ∇f(x_t), γ)‖ ≤ (1/ρ)‖∇f(x_t) − g_t‖. Since ‖G_X(x_t, ∇f(x_t), γ)‖ ≤ ‖G_X(x_t, g_t, γ)‖ + ‖G_X(x_t, g_t, γ) − G_X(x_t, ∇f(x_t), γ)‖, we have ‖G_X(x_t, ∇f(x_t), γ)‖ ≤ ‖G_X(x_t, g_t, γ)‖ + (1/ρ)‖∇f(x_t) − g_t‖ = (1/γ)‖x_t − x̃_{t+1}‖ + (1/ρ)‖∇f(x_t) − g_t‖ = M_t. When M_t → 0, we obtain ‖G_X(x_t, ∇f(x_t), γ)‖ → 0, where x_t is a stationary point or local minimum of the problem (1) [15]. Clearly, our measure E[M_t] is tighter than the gradient mapping measure E‖G_X(x_t, ∇f(x_t), γ)‖.
4.3 Convergence Analysis of SUPER-ADAM (τ = 1)
In this subsection, we provide the convergence analysis of our SUPER-ADAM (τ = 1) algorithm, which uses the momentum-based variance-reduced gradient estimator [11, 32].
Assumption 4. Each component function f(x; ξ) is L-smooth for all ξ ∈ D, i.e., ‖∇f(x; ξ) − ∇f(y; ξ)‖ ≤ L‖x − y‖, ∀x, y ∈ X.
Assumption 4 is widely used in variance-reduced algorithms [13, 11]. By Assumption 4, we have ‖∇f(x) − ∇f(y)‖ = ‖E[∇f(x; ξ) − ∇f(y; ξ)]‖ ≤ E‖∇f(x; ξ) − ∇f(y; ξ)‖ ≤ L‖x − y‖ for all x, y ∈ X. Thus the function f(x) is also L-smooth.
Theorem 1.
In Algorithm 1, under Assumptions 1, 2, 3 and 4, when X ⊂ R^d, and given τ = 1, µ_t = k/(m + t)^{1/3} and α_{t+1} = cµ_t^2 for all t ≥ 0, 0 < γ ≤ ρm^{1/3}/(4kL), 1/k^3 + 10L^2γ^2/ρ^2 ≤ c ≤ m^{2/3}/k^2, m ≥ max(3/2, k^3, 8^{3/2}/(3k)^{3/2}) and k > 0, we have
(1/T)Σ_{t=1}^T E‖G_X(x_t, ∇f(x_t), γ)‖ ≤ (1/T)Σ_{t=1}^T E[M_t] ≤ 2√(2G)·m^{1/6}/T^{1/2} + 2√(2G)/T^{1/3}, (20)
where G = (f(x_1) − f*)/(kργ) + m^{1/3}σ^2/(8k^2L^2γ^2) + (k^2c^2σ^2/(4L^2γ^2))·ln(m + T).
Remark 1. Without loss of generality, let ρ = O(1), k = O(1), m = O(1), and γ = O(1); then c = O(1) and G = O(c^2σ^2 ln(m + T)) = Õ(1). Thus, our algorithm has a convergence rate of Õ(1/T^{1/3}). Letting 1/T^{1/3} ≤ ε, we have T ≥ ε^{−3}. Our algorithm only requires computing two stochastic gradients at each iteration (i.e., the stochastic gradients ∇f(x_{t+1}; ξ_{t+1}) and ∇f(x_t; ξ_{t+1}) needed to estimate g_{t+1}) and needs T iterations. Thus, our SUPER-ADAM (τ = 1) has a gradient complexity of 2·T = Õ(ε^{−3}) for finding an ε-stationary point.
Corollary 1. In Algorithm 1, under the above Assumptions 1, 2, 3 and 4, when X = R^d, and given τ = 1, µ_t = k/(m + t)^{1/3} and α_{t+1} = cµ_t^2 for all t ≥ 0, γ = ρm^{1/3}/(νkL) (ν ≥ 4), 1/k^3 + 10L^2γ^2/ρ^2 ≤ c ≤ m^{2/3}/k^2, m ≥ max(3/2, k^3, 8^{3/2}/(3k)^{3/2}) and k > 0, we have
(1/T)Σ_{t=1}^T E‖∇f(x_t)‖ ≤ (max_{1≤t≤T}‖H_t‖/ρ)·(2√(2G′)/T^{1/2} + 2√(2G′)/(m^{1/6}T^{1/3})), (21)
where G′ = νL(f(x_1) − f*) + ν^2σ^2/8 + (ν^2k^4c^2σ^2/(4m^{1/3}))·ln(m + T).
Remark 2. Under the same conditions as in Theorem 1, based on the metric E‖∇f(x)‖, our SUPER-ADAM (τ = 1) still has a gradient complexity of Õ(ε^{−3}). Interestingly, the right-hand side of the above inequality (21) includes a term max_{1≤t≤T}‖H_t‖/ρ that can be seen as an upper bound on the condition number of the adaptive matrices {H_t}_{t=1}^T. When using the H_t given in Case 1 above, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (G_1 + λ)/λ, as in the existing adaptive gradient methods that assume the bounded stochastic gradient ‖∇f(x; ξ)‖_∞ ≤ G_1; when using the H_t given in Case 2 above, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (G_2 + σ + λ)/λ, as in the existing adaptive gradient methods that assume the bounded full gradient ‖∇f(x)‖ ≤ G_2; when using the H_t given in Case 3 above, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (L + λ)/λ; when using the H_t given in Case 4 above, we have max_{1≤t≤T}‖H_t‖/ρ ≤ (2G_1 + λ)/λ or max_{1≤t≤T}‖H_t‖/ρ ≤ (2(G_2 + σ) + λ)/λ. Note that we only study the gradient (sample) complexity of our algorithm in the worst case, without considering specific structures such as the sparsity of the stochastic gradient. Since the adaptive matrix H_t can be given as H_t = A_t + λI_d, we have max_{1≤t≤T}‖H_t‖/ρ = (max_{1≤t≤T} δ_max(A_t) + λ)/(min_{1≤t≤T} δ_min(A_t) + λ). Here we can only choose a proper tuning parameter λ to balance the adaptive information against the noise in A_t. To reduce max_{1≤t≤T}‖H_t‖/ρ, we should not simply increase λ, but should instead design the matrix A_t with a small condition number through techniques such as clipping [27].
4.4 Convergence Analysis of SUPER-ADAM (τ = 0)
In this subsection, we provide the convergence analysis of our SUPER-ADAM (τ = 0) algorithm, which uses the basic momentum stochastic gradient estimator [22].
Assumption 5. The function f(x) = E_ξ[f(x; ξ)] is L-smooth, i.e., ‖∇f(x) − ∇f(y)‖ ≤ L‖x − y‖, ∀x, y ∈ X.
Assumption 5 is widely used in adaptive algorithms [36, 8, 40] and is milder than Assumption 4.
Theorem 2. In Algorithm 1, under Assumptions 1, 2, 3 and 5, when X ⊂ R^d, and given τ = 0, µ_t = k/(m + t)^{1/2}, α_{t+1} = cµ_t for all t ≥ 0, k > 0, 0 < γ ≤ ρm^{1/2}/(8Lk), 8Lγ/ρ ≤ c ≤ m^{1/2}/k, and m ≥ k^2, we have
(1/T)Σ_{t=1}^T E‖G_X(x_t, ∇f(x_t), γ)‖ ≤ (1/T)Σ_{t=1}^T E[M_t] ≤ 2√(2M)·m^{1/4}/T^{1/2} + 2√(2M)/T^{1/4},
where M = (f(x_1) − f*)/(ργk) + 2σ^2/(ργkL) + (2mσ^2/(ργkL))·ln(m + T).
Remark 3.
Without loss of generality, let ρ = O(1), k = O(1), m = O(1) and γ = O(1); then M = O(σ^2 ln(m + T)) = Õ(1). Thus, our algorithm has a convergence rate of Õ(1/T^{1/4}). Considering 1/T^{1/4} ≤ ε, we have T ≥ ε^{−4}. Since our algorithm requires computing one stochastic gradient at each iteration and needs T iterations, our SUPER-ADAM (τ = 0) has a gradient complexity of 1·T = Õ(ε^{−4}) for finding an ε-stationary point.
Corollary 2. In Algorithm 1, under the above Assumptions 1, 2, 3 and 5, when X = R^d, and given τ = 0, µ_t = k/(m + t)^{1/2}, α_{t+1} = cµ_t for all t ≥ 0, k > 0, γ = ρm^{1/2}/(νLk) (ν ≥ 8), 8Lγ/ρ ≤ c ≤ m^{1/2}/k, and m ≥ k^2, we have
(1/T)Σ_{t=1}^T E‖∇f(x_t)‖ ≤ (max_{1≤t≤T}‖H_t‖/ρ)·(2√(2M′)/T^{1/2} + 2√(2M′)/(m^{1/4}T^{1/4})),
where M′ = νL(f(x_1) − f*) + 2νσ^2 + 2νmσ^2 ln(m + T).
Remark 4. Under the same conditions as in Theorem 2, based on the metric E‖∇f(x_t)‖, our SUPER-ADAM (τ = 0) still has a gradient complexity of Õ(ε^{−4}) for finding an ε-stationary point.
5 Differences between Our Algorithm and Related Algorithms
In this section, we highlight some significant differences between our algorithm and related algorithms, i.e., the STORM algorithm [11] and Adam-type algorithms [22, 29, 40]. Although our SUPER-ADAM (τ = 1) algorithm uses the same stochastic gradient estimator as STORM, there are significant differences: 1) Our algorithm covers both constrained and unconstrained optimization, while STORM only addresses unconstrained optimization. 2) In our algorithm, we introduce a weighted solution x_{t+1} at step 10 via a momentum update. In this way, our algorithm can easily incorporate various adaptive learning rates and variance-reduced techniques. Specifically, we can flexibly use various adaptive learning rates and different stochastic gradient estimators g_t at step 9 of our algorithm. In fact, this is one of the important novelties of our paper. In contrast, STORM only uses a simple gradient descent iteration with a specific, monotonically decreasing adaptive learning rate. Similarly, although our SUPER-ADAM (τ = 0) algorithm uses the same stochastic gradient estimator as these Adam-type algorithms, there are significant differences besides the use of different adaptive learning rates. These Adam-type algorithms use a decreasing learning rate η_t = η/√t (see (3), (4) and (6) above), while our algorithm only uses a constant learning rate γ in addition to an adaptive learning rate. Moreover, our algorithm introduces a weighted solution x_{t+1} at step 10 with a decreasing parameter µ_t = k/√(m + t) (see Theorem 2) and uses a decreasing parameter α_{t+1} = cµ_t in the gradient estimator, while these Adam-type algorithms only use a constant parameter α_1 ∈ (0, 1) in their gradient estimators. In this way, our algorithm uses the decreasing parameters µ_t and α_{t+1} to control the noise in our gradient estimator, so our convergence analysis for constrained optimization does not require additional assumptions such as the bounded (stochastic) gradient assumption. For example, when τ = 0, our gradient estimator is g_{t+1} = α_{t+1}∇f(x_{t+1}; ξ_{t+1}) + (1 − α_{t+1})g_t. Intuitively, as t grows, α_{t+1} = ck/√(m + t) becomes small, so less new noise is added to our gradient estimator g_{t+1}.
6 Numerical Experiments
In this section, we conduct experiments to empirically evaluate our SUPER-ADAM algorithm on two deep learning tasks as in [25]: image classification on the CIFAR-10, CIFAR-100 and ImageNet datasets, and language modeling on the Wiki-Text2 dataset.
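As a sanity check of the earlier Algorithm 1 sketch (the `super_adam` function given in Section 3), the toy run below minimizes a stochastic least-squares objective with τ = 0 and τ = 1; the data and hyper-parameters are illustrative assumptions, not the tuned settings reported below.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(512, 20))
x_star = rng.normal(size=20)
y = A @ x_star + 0.01 * rng.normal(size=512)

def grad(x, xi):
    i = int(xi) % A.shape[0]                 # single-sample stochastic gradient
    return 2.0 * (A[i] @ x - y[i]) * A[i]    # of f(x; i) = (a_i^T x - y_i)^2

for tau in (0, 1):
    x_hat = super_adam(grad, np.zeros(20), T=20000, gamma=0.05, tau=tau)
    rmse = np.linalg.norm(A @ x_hat - y) / np.sqrt(len(y))
    print(f"tau={tau}  RMSE={rmse:.4f}")     # tau=1 typically converges faster
```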
In the experiments, we compare our SUPER-ADAM algorithm against several state-of-the-art adaptive gradient algorithms, including: (1) SGD, (2) Adam [22], (3) Amsgrad [29], (4) AdaGrad-Norm [34], (5) Adam+ [25], (6) STORM [11] and (7) AdaBelief [40]. For our SUPER-ADAM algorithm, we consider τ = 1 and τ = 0, respectively. Without loss of generality, in the following experiments we only use Case 1 of Algorithm 1 to generate the adaptive matrix H_t, and we set λ = 0.0005. All experiments are run on a machine with an Intel Xeon E5-2683 CPU and 4 Nvidia Tesla P40 GPUs.
6.1 Image Classification Task
In this experiment, we conduct the image classification task on the CIFAR-10, CIFAR-100 and ImageNet datasets. We train ResNet-18 [20] and VGG-19 [30] on the CIFAR-10 and CIFAR-100 datasets, respectively. For all the optimizers, we set the batch size to 128 and train for 200 epochs. For the learning rates and other hyper-parameters, we perform a grid search and report the best result for each optimizer. In the Adam, Amsgrad and AdaBelief algorithms, we set the learning rate to 0.001. In AdaGrad-Norm, the best learning rate is 17 for CIFAR-10 and 10 for CIFAR-100, respectively. In Adam+, we use the tuning parameters recommended in [25]. In STORM, the best result is obtained with w = 6, k = 10 and c = 100 for CIFAR-10, and with w = 3, k = 10 and c = 100 for CIFAR-100. For our SUPER-ADAM algorithm, on both the CIFAR-10 and CIFAR-100 datasets, we set k = 1, m = 100, c = 40, γ = 0.001 when τ = 1, and k = 1, m = 100, c = 20, γ = 0.001 when τ = 0. Note that although c > m^{2/3}/k^2 (c > m^{1/2}/k) in our algorithm, we set α_t = min(α_t, 0.9) for the first several iterations. In our algorithm, µ_t = k/(m + t)^{1/3} (µ_t = k/(m + t)^{1/2}) decreases as the iteration number t increases, so α_{t+1} = cµ_t^2 (α_{t+1} = cµ_t) will be less than 1 after the first several iterations. We train a ResNet-34 [20] on the ImageNet dataset. For all the optimizers, we set the batch size to 256 and train for 60 epochs. In Adam, Amsgrad and AdaBelief, we set the learning rate to 0.001. In AdaGrad-Norm, the best learning rate is 30. In Adam+, we set the learning rate to 0.1. In STORM, the best result is obtained with k = 5, w = 100 and c = 10. For our algorithm, we set k = 1, m = 100, c = 40, γ = 0.01 when τ = 1, and k = 1, m = 100, c = 4, γ = 0.04 when τ = 0. Figures 1 and 2 show the train and test errors and accuracy results on the CIFAR-10 and CIFAR-100 datasets, respectively. Our SUPER-ADAM algorithm consistently outperforms the other optimizers by a large margin, especially when we set τ = 1. When we set τ = 0, our SUPER-ADAM algorithm obtains performance comparable to Adam/AmsGrad. Figure 3 shows the results on ImageNet for the different optimizers over ResNet-34; our algorithm outperforms the other optimizers, especially when τ = 1. Figure 4 shows that both the condition number of H_t and the ℓ2 norm of the full gradient (i.e., ‖∇f(x_t)‖) decrease as the number of iterations increases. From these results, since the condition number of H_t decreases as the number of iterations increases, it must have an upper bound. Thus, these experimental results further indicate that the convergence results in Corollaries 1 and 2 above are reasonable.
6.2 Language Modeling Task
In this experiment, we conduct the language modeling task on the Wiki-Text2 dataset. Specifically, we train a 2-layer LSTM [21] and a 2-layer Transformer on the Wiki-Text2 dataset. For the LSTM, we use 650-dimensional word embeddings and 650 hidden units per layer.
Due to space limitations, we provide the experimental results for the Transformer in the supplementary materials. In this experiment, we set the batch size to 20 and train for 40 epochs with a dropout rate of 0.5. We also clip the gradients by norm 0.25 to guard against exploding gradients in the LSTM, and we decrease the learning rate by a factor of 4 whenever the validation error increases. For the learning rate, we also perform a grid search and report the best result for each optimizer. In the Adam and Amsgrad algorithms, we set the learning rate to 0.001 for the LSTM. In the AdaGrad-Norm algorithm, the best learning rate is 40. In the Adam+ algorithm, we use the learning rate 20. In the AdaBelief algorithm, we set the learning rate to 0.1. In the STORM algorithm, we set w = 50, k = 10 and c = 100. In our SUPER-ADAM algorithm, we set k = 1, m = 100, c = 40, γ = 0.001 when τ = 1, and k = 1, m = 100, c = 20, γ = 0.01 when τ = 0. Figure 5 shows the train and test perplexities (losses) for the different optimizers. When τ = 1, our SUPER-ADAM algorithm outperforms all the other optimizers. When τ = 0, our SUPER-ADAM optimizer obtains performance comparable to the other Adam-type optimizers.
7 Conclusions
In this paper, we proposed a novel faster and universal adaptive gradient framework (i.e., SUPER-ADAM) by introducing a universal adaptive matrix that includes most existing adaptive gradient forms. In particular, our algorithm can flexibly work with momentum and variance-reduced techniques. Moreover, we provided a novel convergence analysis framework for adaptive gradient methods in the nonconvex setting. Experimental studies were conducted on both image classification and language modeling tasks, and all empirical results verify the superior performance of our algorithm.
Acknowledgments and Disclosure of Funding
This work was partially supported by NSF IIS 1845666, 1852606, 1838627, 1837956, 1956002, OIA 2040588.
1. What is the focus of the paper regarding the universal adaptive gradient framework?
2. What are the strengths of the proposed method, particularly its ability to incorporate momentum and variance reduction techniques?
3. What are the questions or concerns regarding the theoretical analysis, specifically the impact of the parameter ρ on the convergence properties?
4. How should one choose a suitable adaptive learning rate for the super-adam algorithm in experiments?
5. Are there any minor errors or typos in the paper that need correction?
Summary Of The Paper Review
Summary Of The Paper
This paper presents a universal adaptive gradient framework (i.e., super-adam), which can flexibly incorporate the momentum and variance-reduced techniques to effectively accelerate the convergence of the algorithm. Moreover, it provides a sound convergence analysis framework for super-adam under the nonconvex setting. Experimental results on some deep learning tasks demonstrate the efficiency of the proposed super-adam algorithm. Super-adam may have broad applications in training deep learning models. Overall, I recommend accepting this paper.
Review
I have the following major comments/questions: In the theoretical analysis, the parameter ρ is very important. Could you summarize the impact of this parameter on the convergence analysis? E.g., how does it affect the convergence properties of the proposed algorithm? In the super-adam algorithm, we can choose flexible adaptive learning rates. Could you give us a guide on how to choose a good adaptive learning rate in the experiments? Minor comments: In the Abstract, line 9: “…can flexibly integrates…” should be “…can flexibly integrate…”. In the convergence analysis, the gradient estimator g_{t+1} is right (in line 436). At step 13 of the super-adam algorithm, there is a symbol error: g_{t−} should be g_{t+}. In the experiments, do you use the same adaptive learning rates in super-adam (τ = 1) and super-adam (τ = 0)?
NIPS
Title RetroXpert: Decompose Retrosynthesis Prediction Like A Chemist Abstract Retrosynthesis is the process of recursively decomposing target molecules into available building blocks. It plays an important role in solving problems in organic synthesis planning. To automate or assist in the retrosynthesis analysis, various retrosynthesis prediction algorithms have been proposed. However, most of them are cumbersome and lack interpretability about their predictions. In this paper, we devise a novel template-free algorithm for automatic retrosynthetic expansion inspired by how chemists approach retrosynthesis prediction. Our method disassembles retrosynthesis into two steps: i) identify the potential reaction center of the target molecule through a novel graph neural network and generate intermediate synthons, and ii) generate the reactants associated with synthons via a robust reactant generation model. While outperforming the state-of-the-art baselines by a significant margin, our model also provides chemically reasonable interpretation. 1 Introduction Retrosynthesis of the desired compound is commonly constructed by recursively decomposing it into a set of available reaction building blocks. This analysis mode was formalized in the pioneering work [1, 2] and has now become one of the fundamental paradigms in the modern chemistry community. Retrosynthesis is challenging, in part due to the huge size of the search space. The reported synthetic-organic knowledge consists of on the order of 10^7 reactions and compounds [3]. On the other hand, the incomplete understanding of reaction mechanisms also increases the difficulty of retrosynthesis, which is typically undertaken by human experts. Therefore, it is a subjective process that requires considerable expertise and experience. Moreover, molecules may have multiple possible retrosynthetic routes, and it is challenging even for experts to select the most appropriate route, since the feasibility of a route is often determined by multiple factors, such as the availability of potential reactants, reaction conditions, reaction yield, and potential toxic byproducts. ∗Both authors contributed equally to this work. †This work was done while Chaochao Yan, Qianggang Ding, Shuangjia Zheng, and Jinyu Yang were interns at Tencent AI Lab. In this work, we focus on the single-step version (predict possible reactants given the product) of retrosynthesis, following previous methods [4, 5, 6]. Our method can be decomposed into two subtasks [1, 7]: i) breaking down the given target molecule into a set of synthons, which are hypothetical units representing potential starting reactants in the retrosynthesis of the target, and ii) calibrating the obtained synthons into a set of reactants, each of which corresponds to an available molecule. Various computational methods [8, 9, 10, 11, 12, 4, 13, 14, 5, 6, 15, 16] have been developed to assist in designing synthetic routes for novel molecules, and these methods can be broadly divided into two categories: template-based and template-free. Template-based methods plan retrosynthesis based on hand-encoded rules or reaction templates. Synthia (formerly Chematica) relies on hand-encoded reaction transformation rules [11], and it has been experimentally validated as an efficient software for retrosynthesis [17].
However, it is infeasible to manually encode all the synthesis routes in practice, considering the exponential growth in the number of reactions [14]. Reaction templates are often automatically extracted from reaction databases, and appropriate templates are selected to apply to the target [12, 13, 14, 5]. The key step of these approaches is to select relevant templates for the given target. An obvious limitation is that these methods can only infer reactions within the chemical space covered by the template database, preventing them from discovering novel reactions [18]. On the other hand, template-free methods [4, 6, 15] treat retrosynthesis as a neural machine translation problem, since molecules can be represented as SMILES³ strings. Although simple and expressive, these models do not fit the chemists’ analytical process and lack interpretability behind their predictions. Besides, such approaches fail to exploit the rich chemistry knowledge within chemical reactions. For example, the generation order of reactants is undetermined in [4, 6, 15] since they ignore the correlation between synthons and reactants, resulting in slower and inferior model convergence. Similar to our method, the concurrent work G2Gs [16] also presents a decomposition-and-generation two-step framework. G2Gs proposes to incrementally generate reactants from the associated synthons with a variational graph translation model. However, G2Gs can predict at most one bond disconnection, which is not universal. Besides, G2Gs generates multiple reactants independently, which ignores the relationship between multiple reactants. To overcome these challenges, inspired by the expert experience of chemists, we devise a two-step framework named RetroXpert (Retrosynthesis eXpert) to automate retrosynthesis prediction. Our model tackles it in two steps, as shown in Figure 1. Firstly, we propose to identify the potential reaction center within the target molecule using a novel Edge-enhanced Graph Attention Network (EGAT). The reaction center is referred to as the set of bonds that will be disconnected in the retrosynthesis process. Synthons can be obtained by splitting the target molecule according to the reaction center. Secondly, the Reactant Generation Network (RGN) predicts the associated reactants given the target molecule and synthons. Different from previous methods [4, 6, 15], the reactant generation order can be uniquely decided in our method, thanks to the intermediate synthons. Moreover, we notice that the robustness of the RGN plays an important role. To robustify the RGN, we propose to augment the training data of the RGN by incorporating unsuccessfully predicted synthons. Our main contributions can be summarized as follows: 1) We propose to identify the potential reaction center with a novel Edge-enhanced Graph Attention Network (EGAT) which is strengthened with chemical knowledge. 2) By splitting the target molecule into synthons, the RGN is able to determine the generation order of reactants. We further propose to augment the training data by introducing unsuccessfully predicted synthons, which makes the RGN robust and achieves significant improvement. 3) On the standard USPTO-50K dataset [19], our method achieves 70.4% and 65.5% Top-1 accuracy w/ and wo/ reaction type, respectively, which outperforms the SOTA accuracy of 63.2% (w/) and 52.6% (wo/) reported in [5] by a large margin.
2 Methodology Given a molecule graph G with N nodes (atoms), we denote the matrix of node features as X ∈ R^{N×M}, the tensor of edge features as E ∈ R^{N×N×L}, and the adjacency matrix as A ∈ {0, 1}^{N×N}. M and L are the feature dimensions of atoms and bonds, respectively. We denote by P, S, R the product, synthons, and reactants in the reaction formulation, respectively. The single-step retrosynthesis problem can be described as follows: given the desired product P, seek a set of reactants R = {R1, R2, ..., Rn} that can produce the major product P through a valid chemical reaction. It is denoted as P → R (predict R given P), which is the reverse process of the forward reaction prediction problem [20, 21] that predicts the outcome products given a set of reactants.
³https://www.daylight.com/dayhtml/doc/theory/theory.smiles.html
As illustrated in Figure 1, our method decomposes the retrosynthesis task (P → R) into two closely dependent steps: reaction center identification (P → S) and reactant generation (S → R). The first step is to identify the potential reaction bonds that will be disconnected during the retrosynthesis, after which the product P can be split into a set of intermediate synthons S = {S1, S2, ..., Sn}. Note that each synthon Si can be regarded as a substructure of a reactant Ri. The second step is to transform the synthons S = {S1, S2, ..., Sn} into the associated reactants R = {R1, R2, ..., Rn}. Although the intermediate synthons are not needed in retrosynthesis, decomposing the original retrosynthesis task (P → R) into two dependent procedures has multiple benefits, which will be elaborated thoroughly in the following sections. 2.1 EGAT for reaction center identification We treat reaction center identification as a graph-to-graph transformation problem, which is similar to forward reaction outcome prediction [21]. To achieve this, we propose a graph neural network named Edge-enhanced Graph Attention Network (EGAT), which takes the molecule graph G as input and predicts a disconnection probability for each bond; this is the main task. Since a product may be produced by different reactions, there can be multiple reaction centers for a given product, and each reaction center corresponds to a different reaction. Current message passing neural networks [22] are shallow and capture only local structural information for each node, so it is difficult to distinguish multiple reaction centers without global information. To alleviate this problem, we add a graph-level auxiliary task to predict the total number of disconnection bonds. As shown in Figure 2, distinct from the Graph Attention Network (GAT) [23], which is designed to learn node and graph-level embeddings, our proposed EGAT also learns edge embeddings. It identifies the reaction center by predicting the disconnection probability for each bond, taking its edge embedding as input. Given the target G = {A, E, X}, the EGAT layer computes the node embedding h′_i and edge embedding p′_{i,j} from the previous layer’s embeddings h_i and p_{i,j}, where W ∈ R^{F′×F}, a ∈ R^{2F′+D}, U ∈ R^{F×(F′+D)}, and V ∈ R^{D×(2F+D)} are trainable parameters, ‖ denotes the concatenation operation, N_i is the set of neighbor nodes of node i, α_{i,j} is the attention weight between node i and its neighbor node j, and h′_i ∈ R^F as well as p′_{i,j} ∈ R^D are the output node and edge representations, respectively.
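The display equations for this layer do not survive the text extraction above, so the following PyTorch sketch reconstructs one layer that is consistent with the stated parameter shapes and with the attention coefficient c_{i,j} = LeakyReLU(a^T[z_i ‖ z_j ‖ p_{i,j}]) implied by the ablation in Section 3.1; the exact update equations are our assumption, not the authors’ published ones.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EGATLayer(nn.Module):
    """Plausible EGAT layer over a dense adjacency; shapes follow
    W in R^{F'xF}, a in R^{2F'+D}, U in R^{Fx(F'+D)}, V in R^{Dx(2F+D)}."""
    def __init__(self, F_in, F_hid, D):
        super().__init__()
        self.W = nn.Linear(F_in, F_hid, bias=False)      # z_i = W h_i
        self.a = nn.Parameter(torch.empty(2 * F_hid + D))
        nn.init.normal_(self.a, std=0.1)
        self.U = nn.Linear(F_hid + D, F_in, bias=False)  # node update
        self.V = nn.Linear(2 * F_in + D, D, bias=False)  # edge update

    def forward(self, h, p, adj):
        # h: (N, F_in) nodes, p: (N, N, D) edges, adj: (N, N) {0,1}.
        N = h.size(0)
        z = self.W(h)
        zi = z.unsqueeze(1).expand(N, N, -1)
        zj = z.unsqueeze(0).expand(N, N, -1)
        c = F.leaky_relu(torch.cat([zi, zj, p], dim=-1) @ self.a)  # (N, N)
        c = c.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(c, dim=1)          # attention over neighbors j
        alpha = torch.nan_to_num(alpha)          # guard isolated nodes
        msg = torch.cat([zj, p], dim=-1)         # (N, N, F'+D)
        h_new = torch.relu(self.U((alpha.unsqueeze(-1) * msg).sum(dim=1)))
        hi = h_new.unsqueeze(1).expand(N, N, -1)
        hj = h_new.unsqueeze(0).expand(N, N, -1)
        p_new = torch.relu(self.V(torch.cat([hi, hj, p], dim=-1)))
        return h_new, p_new

# Disconnection head on the final edge embedding (Section 2.1):
# d_ij = torch.sigmoid(p_final @ w_fc) for a learned w_fc of size D.
```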
The initial input embeddings h_i and p_{i,j} are the input node and edge feature vectors x_i and e_{i,j}, respectively, which will be detailed later; in this special case the dimensions F and D equal the dimensions of the associated features, respectively. After stacking multiple EGAT layers, we obtain the final edge representation p_{i,j} for the chemical bond between nodes i and j, as well as the node representation h_i for each node i. To predict the disconnection probability for a bond, we apply a fully-connected layer parameterized by w_fc ∈ R^D and a Sigmoid activation layer to p_{i,j}, and its disconnection probability is d_{i,j} = Sigmoid(w_fc^T · p_{i,j}). Note that the multi-head attention mechanism can also be applied as in the original GAT. The optimization goal for bond disconnection prediction is to minimize the negative log-likelihood between the prediction d_{i,j} and the ground truth y_{i,j} ∈ {0, 1} through the binary cross-entropy loss function:
L_M = −(1/K) Σ_{k=1}^K Σ_{a_{i,j}∈A^k} a_{i,j}[(1 − y_{i,j})log(1 − d_{i,j}) + y_{i,j} log(d_{i,j})], (2)
where K is the total number of training reactions and the bond (i, j) exists if the associated adjacency element a_{i,j} is nonzero. The ground truth y_{i,j} = 1 means the bond (i, j) is disconnected; otherwise it remains the same during the reaction. Bond disconnection labels can be obtained by comparing the molecule graphs of the target and the reactants. The input of the auxiliary task is the graph-level representation h_G = READOUT({h_i | 1 ≤ i ≤ N}), which is the output of the READOUT operation over all learned node representations. We adopt the arithmetic mean as the READOUT function, h_G = (1/N)Σ_{i=1}^N h_i, and it works well in practice. Similarly, a fully-connected layer parameterized by W_s ∈ R^{(1+N_max)×F} and a Softmax activation function are applied to h_G to predict the total number of disconnected bonds, which is solved as a classification problem here. Each category represents an exact number of disconnected bonds, so there are 1 + N_max classification categories, where N_max is the maximum number of possible disconnected bonds in the retrosynthesis. We denote the Softmax output as q = Softmax(W_s · h_G). The total number of disconnected bonds for each target molecule is predicted as:
n* = argmax_n (q_n) = argmax_n (Softmax(W_s · h_G)_n), 0 ≤ n ≤ N_max. (3)
The ground truth number of disconnections for molecule k is denoted as N^k, the indicator function 1(i, N^k) is 1 if i equals N^k and 0 otherwise, and the cross-entropy loss for the auxiliary task is:
L_A = (1/K) Σ_{k=1}^K CrossEntropy(N^k, q^k) = −(1/K) Σ_{k=1}^K Σ_{i=0}^{N_max} 1(i, N^k) log(q_i^k). (4)
Finally, the overall loss function for the EGAT is L_EGAT = L_M + αL_A, where α is fixed to 1 in our study since we empirically find that α is not a sensitive hyper-parameter. Atom and bond features. The atom feature consists of a series of general atom information such as atom type, hybridization, and formal charge, while the bond feature is composed of chemical bond information like bond type and conjugation (see Appendix B for details). These features are similar to those used in [24] for chemical property prediction. We compute these features using the open-source toolkit RDKit⁴. To fully utilize the rich atom-mapping information provided by the USPTO datasets [19, 25], we add a semi-template indicator to the atom feature. For retrosynthesis datasets with a given reaction type, a type indicator is also added to the atom feature. Semi-templates. For atom-mapped USPTO datasets, reaction templates are extracted from reaction data as in previous template-based methods [12, 14, 5].
However, we are not interested in full reaction templates since these templates are often too specific; there are as many as 11,647 templates for the USPTO-50K training data [5]. Instead, only the product side of each template is kept, which we name a semi-template. Since reaction templates are closely related to the exact reaction, the semi-template indicator is expected to play a significant role in reaction center identification. The semi-templates can be considered as subgraph patterns within molecules. We build a database of semi-templates from the training data and find all semi-templates that appear within each molecule. For each atom, we mark the indicator bits associated with the matched semi-templates. Note that each atom within a molecule may belong to several semi-templates since these semi-templates are not mutually exclusive. Although reaction templates are introduced, our method is still template-free since i) only semi-templates are incorporated and our method does not rely on full templates to plan the retrosynthesis, and ii) our EGAT still works well in the absence of semi-templates, with only slight performance degradation (Appendix D.2). 2.2 Reactant generation network Once the reaction center has been identified, synthons can be obtained by applying the bond disconnections to decompose the target graph. Since each synthon is basically a substructure within a reactant, we are informed of the total number of reactants and the substructures of these reactants. The remaining task S → R is much simpler than the original P → R, in which even the number of reactants is unknown. Specifically, task S → R is to generate the set of desired reactants given the obtained synthons. Based on commonsense knowledge of chemical reactions, we propose that the ideal RGN should meet the following three requirements: R1) be permutation invariant and generate the same set of reactants regardless of the order of the synthons, R2) consider all given information when generating any reactant, and R3) condition the generation of each reactant also on the previously generated reactants. To fulfill these requirements, we represent molecules in SMILES and formulate S → R as a sequence-to-sequence prediction problem. We convert synthon graphs to SMILES representations using RDKit, though these synthons may be chemically invalid. As in Figure 3, the source sequence is the concatenation of the possible reaction type, the canonical SMILES of the product, and the associated synthons. The target sequence is the desired reactants arranged according to the synthons. We approximate requirement R1 by augmenting training samples with reversely arranged synthons and reactants, as shown in Figure 3. Our empirical studies demonstrate that this approximation works quite well in practice. To satisfy requirement R2, the encoder-decoder attention mechanism [26, 27] is employed, which allows each position in the target sequence to attend to all positions in the source sequence. A similar masked self-attention mechanism [27], which masks future positions in the decoder, is adopted to make the RGN meet requirement R3.
⁴https://www.rdkit.org
Motivated by the success of the Transformer [27] in neural machine translation, we build the RGN based on the Transformer module. The Transformer is a sequence-to-sequence model equipped with two types of attention mechanisms: self-attention and encoder-decoder attention [27]. The Transformer has also been adapted for reaction outcome prediction [28] and retrosynthesis [6], in which both products and reactants are represented in SMILES. We include a brief description of the Transformer in Appendix C.
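Since the synthon graphs are converted to (possibly chemically invalid) SMILES with RDKit, a small sketch of that splitting step may help; the product and the predicted bond below are hypothetical stand-ins for EGAT’s output.

```python
from rdkit import Chem

def product_to_synthons(product_smiles, bond_atom_pairs):
    """Split a product into synthon SMILES by breaking the predicted
    reaction-center bonds given as atom-index pairs (hypothetical here)."""
    mol = Chem.MolFromSmiles(product_smiles)
    bond_ids = [mol.GetBondBetweenAtoms(i, j).GetIdx()
                for i, j in bond_atom_pairs]
    # addDummies=True marks broken bond ends with [*] atoms, akin to synthons.
    frags = Chem.FragmentOnBonds(mol, bond_ids, addDummies=True)
    return sorted(Chem.MolToSmiles(frags).split('.'))

# e.g. breaking the amide C-N bond (atoms 1 and 3) of N-methylacetamide:
print(product_to_synthons('CC(=O)NC', [(1, 3)]))
```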
Determine the generation order of reactants. For the first time, the generation order of reactants can be determined, by aligning reactants in the target with synthons in the source, thanks to the intermediate synthons, each of which is uniquely associated with a reactant. In contrast, the generation order of reactants is undetermined in previous methods [4, 6, 15], which naively treat the sequence-to-sequence model as a black box; the uncertainty of the generation order makes their models hard to train. Robustify the RGN. We find that the EGAT has difficulty distinguishing multiple coexisting reaction centers, which is the major bottleneck of our method. When the reaction center is misidentified, the generated synthons differ from the ground truth. To make our RGN robust enough to predict the desired reactants even if the EGAT fails to recognize the reaction center, we further augment the RGN training data by including the synthons unsuccessfully predicted on the training data. We do not reverse the order of synthons for these augmentation samples as in Figure 3. The intuition behind this is that the EGAT tends to make similar mistakes on the training and test datasets, since both datasets follow the same distribution. This method makes our RGN able to correct reaction center prediction errors and generate the desired set of reactants. 3 Experiments Dataset and preprocessing. We evaluate our method on USPTO-50K [19] and USPTO-full [25] to verify its effectiveness and scalability. USPTO-50K consists of 50K reactions annotated with 10 reaction types (see Appendix A for the type distribution) and is derived from USPTO granted patents [29]. It is widely used in previous retrosynthesis work. We adopt the same 8:1:1 training/validation/test splits as [12, 5]. For the RGN training data, we add an extra 28K samples in which the synthons are reversed, as shown in Figure 3, whenever there are at least two synthons. This yields 68K training samples for the RGN, which is still denoted as USPTO-50K in the following content. USPTO-full consists of 950K cleaned reactions from the USPTO 1976-2016 [25], which has 1,808,937 raw reactions without reaction types. Reactions with multiple products are duplicated into multiple single-product ones. After removing invalid reactions (empty reactants and missing atom mappings) and deduplication, we obtain 950K reactions⁵, which are randomly partitioned into training/validation/test sets in an 8:1:1 ratio. For the EGAT, we build molecule graphs using DGL [30] and extract atom and bond features with RDKit. By comparing the molecule graphs of the product and the reactants, we can identify the disconnection bonds within the product graph and obtain training labels for both the main and auxiliary tasks. This comparison can easily be done for atom-mapped reactions. For reactions without atom mapping, a substructure matching algorithm in RDKit can be utilized to accomplish the comparison. We use RDChiral [31] to extract super general reaction templates and obtain 1,859 semi-templates for the USPTO-50K training data. Semi-templates that appear less than twice are filtered out, and finally 654 semi-templates are obtained. As for the RGN, the product molecule graph is divided into synthon graphs according to the ground truth reaction center, which are then converted into SMILES strings. The input sequence of the RGN is the concatenation of the possible reaction type, the product SMILES string, and the synthon SMILES strings, as illustrated in Figure 3.
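The source/target construction of Figure 3, including the reversed-synthon augmentation that approximates requirement R1, can be sketched as follows; the token format (spacing and the <RX_*> type token) is our assumption rather than the exact scheme used.

```python
def rgn_pairs(rxn_type, product, synthons, reactants):
    """Build RGN (source, target) string pairs: the source concatenates
    the reaction type, product SMILES, and synthons; the target lists the
    reactants in the synthons' order. One reversed copy is added when
    there are at least two synthons, as the R1 augmentation."""
    pairs = []
    orders = [list(range(len(synthons)))]
    if len(synthons) >= 2:
        orders.append(list(reversed(orders[0])))   # augmentation sample
    for order in orders:
        src = ' '.join([f'<RX_{rxn_type}>', product]
                       + [synthons[i] for i in order])
        tgt = ' . '.join(reactants[i] for i in order)
        pairs.append((src, tgt))
    return pairs
```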
Implementation. All reactions are represented in canonical SMILES, which are tokenized with the regular expression in [32]. We use DGL [30] and OpenNMT [33] to implement our EGAT and RGN models, respectively. For the EGAT, we stack three identical four-head attentive layers whose hidden dimension is 128. All embedding sizes in the EGAT are set to 128, including F, F′, and D. N_max is set to two, which covers 99.97% of the training samples. We train the EGAT on USPTO-50K for 80 epochs. The EGAT parameters are optimized with Adam [34] with default settings; the initial learning rate is 0.0005 and is scheduled to be multiplied by 0.2 every 20 epochs. We train the RGN for 300,000 time steps, which takes about 30 hours on two GTX 1080 Ti GPUs. We save a checkpoint of the RGN parameters every 10,000 steps and average the last 10 checkpoints as the final model. We run all experiments three times and report the mean performance by default.
⁵Code and processed USPTO-full data are available at https://github.com/uta-smile/RetroXpert
Evaluation metric. The Top-N accuracy is used as the evaluation metric for retrosynthesis. A beam search [35] strategy is adopted to keep the top K predictions throughout the reactant generation process; K is set to 50 in all experiments. The generated reactants are represented in canonical SMILES. A correctly predicted set of reactants must be exactly the same as the ground truth reactants.
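The exact-match Top-N criterion just described reduces to comparing canonical SMILES; a minimal sketch follows (treating reactant order within a set as irrelevant, which we assume the canonical comparison implies).

```python
from rdkit import Chem

def canon(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol else None

def topn_hit(beam_predictions, ground_truth, n):
    """Exact-match Top-n: a beam entry counts only if its canonical
    reactant set equals the ground truth's. Assumes valid ground truth."""
    truth = sorted(canon(s) for s in ground_truth.split('.'))
    for pred in beam_predictions[:n]:
        cand = [canon(s) for s in pred.split('.')]
        if None not in cand and sorted(cand) == truth:
            return True
    return False
```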
3.1 Reaction center identification results To verify the effectiveness of the edge-enhanced attention mechanism, we also include an ablation study in which the edge embedding p_{i,j} is removed when computing the coefficient c_{i,j} = LeakyReLU(a^T[z_i ‖ z_j]). Results are reported in Table 1. The auxiliary task (Aux) successfully predicts the number of disconnection bonds for 99.2% of the test molecules when the reaction type (Type) is given, and for 86.4% when it is not. As for the main task (Main) alone, its prediction accuracy is 74.4% w/ reaction type and 51.5% wo/ reaction type. However, if we adopt the prediction from the auxiliary task as a prior on the number of disconnection bonds and select the most probable disconnection bonds (EGAT), the prediction accuracy is boosted to 86.0% (w/) and 64.9% (wo/), respectively. The edge-enhanced attention (EAtt) consistently improves the model’s performance in all tasks. The improvement is more significant when the reaction type is unknown, so our EGAT is more practical in real-world applications without reaction types. This demonstrates that the reaction type information plays an important role in retrosynthesis. Reactions of the same type usually share similar reaction patterns (involved atoms, bonds, and functional groups), so it is much easier to recognize the reaction center if the reaction type is given as prior knowledge. We also verify the importance of semi-templates in Appendix D.2. 3.2 Reactant prediction results To robustify the RGN as described in the paragraph Robustify the RGN, we also conduct the P → S prediction on the EGAT training data for USPTO-50K (40K); the prediction accuracy is 89.0% in the reaction-type-conditional setting. We thus obtain about 4K unsuccessful synthon predictions as augmentation samples (Aug); adding the original 68K RGN training data, the total RGN training data size is 72K. For the unconditional setting, the EGAT accuracy is 70.0% and there are 12K augmentation samples, so the total RGN training size is 80K in this case. We train RGN models on USPTO-50K with and without the augmentation (Aug) and report the results in Table 2. RGN evaluation. For the RGN evaluation, the RGN input consists of the ground truth synthons. Therefore, the results in Table 2 indicate the upper bound of our method’s overall retrosynthesis performance. The proposed augmentation strategy does not always improve the upper bound. Without the reaction type given, the RGN generally performs worse with the augmentation due to the introduced noisy training samples. However, when the reaction type is given, this augmentation boosts its prediction accuracy. We presume this is because the reaction type plays a significant role: the RGN learns to put more attention on the reaction type and the product, instead of the synthons, when generating the reactants. Retrosynthesis evaluation. To evaluate the overall retrosynthesis prediction accuracy, the synthons generated by P → S, instead of the ground truth, are input into the RGN. In this way, we only need to compare the predicted reactants with the ground truth ones, without considering whether the reaction center predictions are correct. We report the retrosynthesis results in Table 3. Our method RetroXpert achieves impressive performance on the test data. Specifically, when reaction types are given, our proposed method achieves 70.4% Top-1 accuracy, which outperforms the SOTA Top-1 accuracy of 63.2% [5] by a large margin. Note that our Top-1 accuracy of 70.4% is quite close to the upper bound of 73.4% in Table 2, which indicates that the augmentation strategy proposed in Robustify the RGN is considerably effective. As for the results wo/ given reaction type, our model improves the SOTA Top-1 accuracy from 52.6% [5] to 65.6%. To verify the effectiveness of the augmentation, we conduct an ablation study and report the results in Appendix D.3. While our method performs best in Top-1, Top-3, and Top-5 accuracy, the template-based methods GLN [5] and RetroSim [12] are better at Top-20 and Top-50 predictions since they enumerate multiple different reaction templates for each product to increase the hit rate, whereas our RetroXpert is currently designed to find the single best set of reactants. To increase diversity, we could design new strategies to enumerate multiple reaction centers for each product; this is left as future work. We notice that the gap between the Top-2 and Top-1 accuracy is around 10%. After having experienced chemists investigate these predictions from the synthetic chemistry perspective, we find that about 9/10 of these Top-1 predictions are actually reasonable (see Appendix E for details). This indicates that our method can learn general chemical reaction knowledge that goes beyond the given ground truth. 4 Large scale experiments To demonstrate the scalability of our method, we also experiment on the USPTO-full dataset, which consists of 760K training reactions. We extract 75,129 semi-templates and keep only the 3,788 that appear at least 10 times. We set N_max to 5 to cover 99.87% of the training data. We obtain 1.35M training samples after reversing synthons. The final accuracy of P → S on the training set is 60.5%; there are 0.3M unsuccessful synthon predictions, so the total RGN training data size is 1.65M. We train the RGN for 500,000 time steps on USPTO-full while keeping the other settings the same as those in Section 3. We run the official implementation of GLN following its instructions [5], as well as the self-implemented SCROP [6], on the USPTO-full dataset. Experimental results are reported at the bottom of Table 3.
Our method again significantly outperforms SCROP and GLN, which demonstrates that our model scales well to this large real-world dataset. Note that both template-free methods, SCROP and RetroXpert, outperform GLN significantly, which may indicate that the scalability of template-based methods is rather limited. 5 Prediction visualization How the auxiliary task helps the EGAT identify the reaction center is illustrated in Figure 4. Note that in the first example the two colored bonds and their surrounding structures are very similar. Current shallow GNNs consider only local information and fail to distinguish the true reaction center; under the guidance of the auxiliary task, the EGAT is able to identify the true reaction center. Figure 5 demonstrates the robustness of our method: even if the predicted synthons are different from the ground truth, the RGN still successfully generates the desired reactants. 6 Discussion One major common limitation of current retrosynthesis work is the lack of reasonable evaluation metrics. There may be multiple valid ways to synthesize a product, while the current evaluation metric considers only the given reaction. More evaluation metrics should be proposed in the future. Broader Impact Our proposed new retrosynthesis method RetroXpert solves retrosynthesis prediction in two steps, as chemists do, and it achieves impressive performance. It is template-free and scales well to large real-world datasets. We believe that our work will greatly inspire and advance related research, such as forward reaction prediction and drug discovery. Researchers and industry experts in drug discovery will benefit most from this research, since retrosynthesis prediction is an important part of drug discovery. We are not aware of anyone who may be put at a disadvantage by this research. Our method does not take advantage of data bias; it is general and scalable. Acknowledgments and Disclosure of Funding We would like to thank Hanjun Dai for providing the source implementation of GLN. This work was partially supported by US National Science Foundation IIS-1718853, the CAREER grant IIS-1553687 and Cancer Prevention and Research Institute of Texas (CPRIT) award (RP190107).
1. What is the focus and contribution of the paper on one-step retrosynthesis prediction?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and performance?
3. What are the weaknesses of the paper, especially regarding its presentation and language?
4. Do you have any concerns or suggestions regarding the methodology or applications of the proposed approach?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This submission proposes a two-stage approach to one-step retrosynthesis prediction where the overall task of proposing reactants from a product molecule is divided into (1) reaction center identification and synthon generation; (2) reactant completion from synthons.
Strengths
Strong empirical results are obtained on the USPTO-50K benchmark dataset. The approach is novel (note: contemporary work https://arxiv.org/abs/2006.07038) and informed by how domain experts approach the problem. Allowing the Transformer model to correct mistakes from the graph-based model is an interesting and successful approach.
Weaknesses
The main weakness of this work is in the presentation and language. It would be difficult for a non-expert to follow details about the methods at times. One could argue that the architectures used are not novel from a machine learning perspective, but I believe it is an interesting application nonetheless.
NIPS
Title RetroXpert: Decompose Retrosynthesis Prediction Like A Chemist Abstract Retrosynthesis is the process of recursively decomposing target molecules into available building blocks. It plays an important role in solving problems in organic synthesis planning. To automate or assist in the retrosynthesis analysis, various retrosynthesis prediction algorithms have been proposed. However, most of them are cumbersome and lack interpretability about their predictions. In this paper, we devise a novel template-free algorithm for automatic retrosynthetic expansion inspired by how chemists approach retrosynthesis prediction. Our method disassembles retrosynthesis into two steps: i) identify the potential reaction center of the target molecule through a novel graph neural network and generate intermediate synthons, and ii) generate the reactants associated with synthons via a robust reactant generation model. While outperforming the state-of-the-art baselines by a significant margin, our model also provides chemically reasonable interpretation. 1 Introduction Retrosynthesis of the desired compound is commonly constructed by recursively decomposing it into a set of available reaction building blocks. This analysis mode was formalized in the pioneering work [1, 2] and has now become one of the fundamental paradigms in the modern chemistry community. Retrosynthesis is challenging, in part due to the huge size of the search space. The reported synthetic-organic knowledge consists of on the order of 10^7 reactions and compounds [3]. On the other hand, the incomplete understanding of reaction mechanisms also increases the difficulty of retrosynthesis, which is typically undertaken by human experts. Therefore, it is a subjective process that requires considerable expertise and experience. Moreover, molecules may have multiple possible retrosynthetic routes, and it is challenging even for experts to select the most appropriate route, since the feasibility of a route is often determined by multiple factors, such as the availability of potential reactants, reaction conditions, reaction yield, and potential toxic byproducts. ∗Both authors contributed equally to this work. †This work was done while Chaochao Yan, Qianggang Ding, Shuangjia Zheng, and Jinyu Yang were interns at Tencent AI Lab. In this work, we focus on the single-step version (predict possible reactants given the product) of retrosynthesis, following previous methods [4, 5, 6]. Our method can be decomposed into two subtasks [1, 7]: i) breaking down the given target molecule into a set of synthons, which are hypothetical units representing potential starting reactants in the retrosynthesis of the target, and ii) calibrating the obtained synthons into a set of reactants, each of which corresponds to an available molecule. Various computational methods [8, 9, 10, 11, 12, 4, 13, 14, 5, 6, 15, 16] have been developed to assist in designing synthetic routes for novel molecules, and these methods can be broadly divided into two categories: template-based and template-free. Template-based methods plan retrosynthesis based on hand-encoded rules or reaction templates. Synthia (formerly Chematica) relies on hand-encoded reaction transformation rules [11], and it has been experimentally validated as an efficient software for retrosynthesis [17].
However, it is infeasible to manually encode all synthesis routes in practice, considering the exponential growth in the number of reactions [14]. Reaction templates are often automatically extracted from reaction databases, and appropriate templates are selected to apply to the target [12, 13, 14, 5]. The key step of these approaches is to select relevant templates for the given target. An obvious limitation is that these methods can only infer reactions within the chemical space covered by the template database, preventing them from discovering novel reactions [18]. On the other hand, template-free methods [4, 6, 15] treat retrosynthesis as a neural machine translation problem, since molecules can be represented as SMILES strings (https://www.daylight.com/dayhtml/doc/theory/theory.smiles.html). Although simple and expressive, these models do not fit the chemists' analytical process and lack interpretability behind their predictions. Besides, such approaches fail to exploit the rich chemistry knowledge within chemical reactions. For example, the generation order of reactants is undetermined in [4, 6, 15], since they ignore the correlation between synthons and reactants, resulting in slower and inferior model convergence. Similar to our method, the concurrent work G2Gs [16] also presents a two-step decomposition-and-generation framework. G2Gs proposes to incrementally generate reactants from the associated synthons with a variational graph translation model. However, G2Gs can predict at most one bond disconnection, which is not universal. Besides, G2Gs generates multiple reactants independently, which ignores the relationship between them. To overcome these challenges, and inspired by the expert experience of chemists, we devise a two-step framework named RetroXpert (Retrosynthesis eXpert) to automate retrosynthesis prediction. Our model tackles the problem in two steps, as shown in Figure 1. First, we identify the potential reaction center within the target molecule using a novel Edge-enhanced Graph Attention Network (EGAT). The reaction center is defined as the set of bonds that will be disconnected in the retrosynthesis process. Synthons can then be obtained by splitting the target molecule according to the reaction center. Second, the Reactant Generation Network (RGN) predicts the associated reactants given the target molecule and synthons. Different from previous methods [4, 6, 15], the reactant generation order can be uniquely decided in our method, thanks to the intermediate synthons. Moreover, we notice that the robustness of the RGN plays an important role. To robustify the RGN, we propose to augment its training data by incorporating unsuccessfully predicted synthons. Our main contributions can be summarized as follows: 1) We propose to identify the potential reaction center with a novel Edge-enhanced Graph Attention Network (EGAT), which is strengthened with chemical knowledge. 2) By splitting the target molecule into synthons, the RGN is able to determine the generation order of reactants. We further propose to augment the training data by introducing unsuccessfully predicted synthons, which makes the RGN robust and achieves significant improvement. 3) On the standard USPTO-50K dataset [19], our method achieves 70.4% and 65.5% Top-1 accuracy w/ and wo/ given reaction type, respectively, which outperforms the SOTA accuracy of 63.2% (w/) and 52.6% (wo/) reported in [5] by a large margin.
2 Methodology Given a molecule graph $G$ with $N$ nodes (atoms), we denote the matrix representation of node features as $X \in \mathbb{R}^{N \times M}$, the tensor representation of edge features as $E \in \mathbb{R}^{N \times N \times L}$, and the adjacency matrix as $A \in \{0, 1\}^{N \times N}$, where $M$ and $L$ are the feature dimensions of atoms and bonds, respectively. We denote by P, S, R the product, synthons, and reactants in the reaction formulation, respectively. The single-step retrosynthesis problem can be described as follows: given the desired product P, seek a set of reactants R = {R1, R2, ..., Rn} that can produce the major product P through a valid chemical reaction. It is denoted as P → R (predict R given P), which is the reverse of the forward reaction prediction problem [20, 21] that predicts the outcome products given a set of reactants. As illustrated in Figure 1, our method decomposes the retrosynthesis task (P → R) into two closely dependent steps: reaction center identification (P → S) and reactant generation (S → R). The first step is to identify the potential reaction bonds that will be disconnected during the retrosynthesis, after which the product P can be split into a set of intermediate synthons S = {S1, S2, ..., Sn}. Note that each synthon Si can be regarded as a substructure of a reactant Ri. The second step is to transform the synthons S = {S1, S2, ..., Sn} into the associated reactants R = {R1, R2, ..., Rn}. Although the intermediate synthons are not needed in the final retrosynthesis result, decomposing the original retrosynthesis task (P → R) into two dependent procedures has multiple benefits, which will be elaborated thoroughly in the following sections. 2.1 EGAT for reaction center identification We treat reaction center identification as a graph-to-graph transformation problem, similar to forward reaction outcome prediction [21]. To achieve this, we propose a graph neural network named Edge-enhanced Graph Attention Network (EGAT), which takes the molecule graph G as input and predicts a disconnection probability for each bond; this is the main task. Since a product may be produced by different reactions, there can be multiple reaction centers for a given product, with each reaction center corresponding to a different reaction. Current message passing neural networks [22] are shallow and capture only local structure information for each node, so it is difficult to distinguish multiple reaction centers without global information. To alleviate this problem, we add a graph-level auxiliary task to predict the total number of disconnection bonds. As shown in Figure 2, distinct from the Graph Attention Network (GAT) [23], which is designed to learn node and graph-level embeddings, our proposed EGAT also learns edge embeddings. It identifies the reaction center by predicting the disconnection probability for each bond, taking the bond's edge embedding as input. Given the target $G = \{A, E, X\}$, the EGAT layer computes the node embedding $h'_i$ and edge embedding $p'_{i,j}$ from the previous layer's embeddings $h_i$ and $p_{i,j}$ by the following equations:

$$z_i = W h_i, \qquad c_{i,j} = \mathrm{LeakyReLU}\big(a^T [z_i \,\|\, z_j \,\|\, p_{i,j}]\big), \qquad \alpha_{i,j} = \frac{\exp(c_{i,j})}{\sum_{k \in \mathcal{N}_i} \exp(c_{i,k})},$$
$$h'_i = \sigma\Big(U \Big[\textstyle\sum_{j \in \mathcal{N}_i} \alpha_{i,j} z_j \,\Big\|\, \textstyle\sum_{j \in \mathcal{N}_i} \alpha_{i,j} p_{i,j}\Big]\Big), \qquad p'_{i,j} = \sigma\big(V [h'_i \,\|\, h'_j \,\|\, p_{i,j}]\big), \tag{1}$$

where $W \in \mathbb{R}^{F' \times F}$, $a \in \mathbb{R}^{2F' + D}$, $U \in \mathbb{R}^{F \times (F' + D)}$, and $V \in \mathbb{R}^{D \times (2F + D)}$ are trainable parameters, $\|$ denotes the concatenation operation, $\sigma(\cdot)$ is a nonlinear activation, $\mathcal{N}_i$ is the set of neighbor nodes of node $i$, $\alpha_{i,j}$ is the attention weight between node $i$ and its neighbor node $j$, and $h'_i \in \mathbb{R}^F$ as well as $p'_{i,j} \in \mathbb{R}^D$ are the output node and edge representations, respectively.
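To make the layer computation concrete, below is a minimal PyTorch sketch of one EGAT layer on a single small, dense molecule graph. It is an illustrative reading of the equations above, not the authors' released implementation (which is built on DGL); the class name, the dense tensor layout, and the use of ReLU for the activation σ are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EGATLayer(nn.Module):
    """One edge-enhanced attention layer (dense, single-graph sketch)."""

    def __init__(self, f_node, f_hidden, d_edge):
        super().__init__()
        self.W = nn.Linear(f_node, f_hidden, bias=False)             # W in R^{F' x F}
        self.a = nn.Parameter(torch.randn(2 * f_hidden + d_edge) * 0.1)  # a in R^{2F'+D}
        self.U = nn.Linear(f_hidden + d_edge, f_node, bias=False)    # U in R^{F x (F'+D)}
        self.V = nn.Linear(2 * f_node + d_edge, d_edge, bias=False)  # V in R^{D x (2F+D)}

    def forward(self, h, p, adj):
        # h: (N, F) node embeddings; p: (N, N, D) edge embeddings; adj: (N, N) 0/1.
        n = h.size(0)
        z = self.W(h)                                    # (N, F')
        zi = z.unsqueeze(1).expand(n, n, -1)             # z_i broadcast over j
        zj = z.unsqueeze(0).expand(n, n, -1)             # z_j broadcast over i
        c = F.leaky_relu(torch.cat([zi, zj, p], dim=-1) @ self.a)  # (N, N) scores
        c = c.masked_fill(adj == 0, float('-inf'))       # attend only to neighbors
        alpha = torch.softmax(c, dim=1).nan_to_num()     # nan guard for isolated nodes
        agg_z = alpha @ z                                # sum_j alpha_ij z_j  -> (N, F')
        agg_p = (alpha.unsqueeze(-1) * p).sum(dim=1)     # sum_j alpha_ij p_ij -> (N, D)
        h_out = torch.relu(self.U(torch.cat([agg_z, agg_p], dim=-1)))  # (N, F)
        hi = h_out.unsqueeze(1).expand(n, n, -1)
        hj = h_out.unsqueeze(0).expand(n, n, -1)
        p_out = torch.relu(self.V(torch.cat([hi, hj, p], dim=-1)))     # (N, N, D)
        return h_out, p_out
```

Stacking several such layers yields the final $h_i$ and $p_{i,j}$ used below; multi-head attention is omitted here for brevity.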
The initial input embeddings $h_i$ and $p_{i,j}$ are the input node and edge feature vectors $x_i$ and $e_{i,j}$, respectively, which will be detailed later; in this special case the dimensions $F$ and $D$ equal the dimensions of the associated features. After stacking multiple EGAT layers, we obtain the final edge representation $p_{i,j}$ for the chemical bond between nodes $i$ and $j$, as well as the node representation $h_i$ for each node $i$. To predict the disconnection probability for a bond, we apply a fully-connected layer parameterized by $w_{fc} \in \mathbb{R}^D$ and a Sigmoid activation to $p_{i,j}$, so its disconnection probability is $d_{i,j} = \mathrm{Sigmoid}(w_{fc}^T \cdot p_{i,j})$. Note that the multi-head attention mechanism can also be applied as in the original GAT. The optimization goal for bond disconnection prediction is to minimize the negative log-likelihood between the prediction $d_{i,j}$ and the ground truth $y_{i,j} \in \{0, 1\}$ through the binary cross entropy loss:

$$L_M = -\frac{1}{K} \sum_{k=1}^{K} \sum_{a_{i,j} \in A_k} a_{i,j} \big[ (1 - y_{i,j}) \log(1 - d_{i,j}) + y_{i,j} \log(d_{i,j}) \big], \tag{2}$$

where $K$ is the total number of training reactions and bond $(i, j)$ exists if the associated adjacency element $a_{i,j}$ is nonzero. The ground truth $y_{i,j} = 1$ means the bond $(i, j)$ is disconnected; otherwise it remains the same during the reaction. Bond disconnection labels can be obtained by comparing the molecule graphs of the target and the reactants. The input of the auxiliary task is the graph-level representation $h_G = \mathrm{READOUT}(\{h_i \mid 1 \le i \le N\})$, which is the output of the READOUT operation over all learned node representations. We adopt the arithmetic mean as the READOUT function, $h_G = \frac{1}{N} \sum_{i=1}^{N} h_i$, and it works well in practice. Similarly, a fully-connected layer parameterized by $W_s \in \mathbb{R}^{(1 + N_{max}) \times F}$ and a Softmax activation function are applied to $h_G$ to predict the total number of disconnected bonds, which is solved as a classification problem here. Each category represents an exact number of disconnected bonds, so there are $1 + N_{max}$ classification categories, where $N_{max}$ is the maximum number of possible disconnected bonds in the retrosynthesis. We denote the Softmax output as $q = \mathrm{Softmax}(W_s \cdot h_G)$. The total number of disconnected bonds for each target molecule is predicted as:

$$n^* = \arg\max_n (q_n) = \arg\max_n \big(\mathrm{Softmax}(W_s \cdot h_G)_n\big), \quad 0 \le n \le N_{max}. \tag{3}$$

The ground-truth number of disconnections for molecule $k$ is denoted as $N_k$, the indicator function $\mathbb{1}(i, N_k)$ is 1 if $i$ equals $N_k$ and 0 otherwise, and the cross entropy loss for the auxiliary task is:

$$L_A = \frac{1}{K} \sum_{k=1}^{K} \mathrm{CrossEntropy}(N_k, q^k) = -\frac{1}{K} \sum_{k=1}^{K} \sum_{i=0}^{N_{max}} \mathbb{1}(i, N_k) \log(q_i^k). \tag{4}$$

Finally, the overall loss function for the EGAT is $L_{EGAT} = L_M + \alpha L_A$, where $\alpha$ is fixed to 1 in our study since we empirically find that $\alpha$ is not a sensitive hyper-parameter. Atom and bond features. The atom feature consists of general atom information such as atom type, hybridization, and formal charge, while the bond feature is composed of chemical bond information such as bond type and conjugation (see Appendix B for details). These features are similar to those used in [24] for chemical property prediction. We compute these features using the open-source toolkit RDKit (https://www.rdkit.org). To fully utilize the rich atom-mapping information provided by the USPTO datasets [19, 25], we add a semi-templates indicator to the atom feature. For retrosynthesis datasets with given reaction types, a type indicator is also added to the atom feature.
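For illustration, the snippet below computes a few of the named atom and bond features with RDKit. The full feature set is specified in Appendix B; the one-hot vocabularies used here are simplified assumptions, not the paper's exact choices.

```python
from rdkit import Chem

ATOM_TYPES = ['C', 'N', 'O', 'S', 'F', 'Cl', 'Br', 'I', 'P']  # assumed vocabulary
HYBRIDIZATIONS = [Chem.HybridizationType.SP, Chem.HybridizationType.SP2,
                  Chem.HybridizationType.SP3]
BOND_TYPES = [Chem.BondType.SINGLE, Chem.BondType.DOUBLE,
              Chem.BondType.TRIPLE, Chem.BondType.AROMATIC]

def one_hot(value, choices):
    return [int(value == c) for c in choices]

def atom_features(atom):
    # Atom type, hybridization, formal charge (plus aromaticity), as named in the text.
    return (one_hot(atom.GetSymbol(), ATOM_TYPES)
            + one_hot(atom.GetHybridization(), HYBRIDIZATIONS)
            + [atom.GetFormalCharge(), int(atom.GetIsAromatic())])

def bond_features(bond):
    # Bond type and conjugation, as named in the text.
    return one_hot(bond.GetBondType(), BOND_TYPES) + [int(bond.GetIsConjugated())]

mol = Chem.MolFromSmiles('CC(=O)Oc1ccccc1C(=O)O')  # aspirin as a toy input
X = [atom_features(a) for a in mol.GetAtoms()]
E = {(b.GetBeginAtomIdx(), b.GetEndAtomIdx()): bond_features(b)
     for b in mol.GetBonds()}
```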
Semi-templates. For atom-mapped USPTO datasets, reaction templates are extracted from reaction data as in previous template-based methods [12, 14, 5]. However, we are not interested in full reaction templates, since these templates are often too specific; there are as many as 11,647 templates for the USPTO-50K training data [5]. Instead, only the product side of each template is kept, which we name a semi-template. Since reaction templates are closely related to the exact reaction, the semi-template indicator is expected to play a significant role in reaction center identification. Semi-templates can be considered subgraph patterns within molecules. We build a database of semi-templates from the training data and find all semi-templates that appear within each molecule. For each atom, we mark the indicator bits associated with the matching semi-templates. Note that each atom within a molecule may belong to several semi-templates, since semi-templates are not mutually exclusive. Although reaction templates are introduced, our method is still template-free since i) only semi-templates are incorporated and our method does not rely on full templates to plan the retrosynthesis, and ii) our EGAT still works well in the absence of semi-templates, with only slight performance degradation (Appendix D.2). 2.2 Reactant generation network Once the reaction center has been identified, synthons can be obtained by applying bond disconnections to decompose the target graph. Since each synthon is essentially a substructure of a reactant, we are informed of the total number of reactants and the substructures of these reactants. The remaining task S → R is much simpler than the original P → R, in which even the number of reactants is unknown. Specifically, the task S → R is to generate the set of desired reactants given the obtained synthons. Based on commonsense knowledge of chemical reactions, we propose that the ideal RGN should meet the following three requirements: R1) be permutation invariant and generate the same set of reactants regardless of the order of synthons, R2) consider all given information when generating any reactant, and R3) condition the generation of each reactant on the previously generated reactants. To fulfill these requirements, we represent molecules in SMILES and formulate S → R as a sequence-to-sequence prediction problem. We convert synthon graphs to SMILES representations using RDKit, even though these synthons may be chemically invalid. As in Figure 3, the source sequence is the concatenation of the possible reaction type, the canonical SMILES of the product, and the associated synthons. The target sequence is the desired reactants arranged according to the synthons. We approximate requirement R1 by augmenting training samples with reversely arranged synthons and reactants, as shown in Figure 3; our empirical studies demonstrate that this approximation works well in practice. To satisfy requirement R2, the encoder-decoder attention mechanism [26, 27] is employed, which allows each position in the target sequence to attend to all positions in the source sequence. A masked self-attention mechanism [27], which masks future positions in the decoder, is adopted to make the RGN meet requirement R3. Motivated by the success of the Transformer [27] in neural machine translation, we build the RGN on the Transformer module. The Transformer is a sequence-to-sequence model equipped with two types of attention mechanisms: self-attention and encoder-decoder attention [27]. The Transformer has also been adapted for reaction outcome prediction [28] and retrosynthesis [6], in which both products and reactants are represented in SMILES.
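As a rough sketch of how such training pairs could be assembled: the exact token layout follows Figure 3, which we can only approximate here, so the helper name and the space/'.'-joined formatting are our assumptions.

```python
def rgn_training_pairs(reaction_type, product, synthons, reactants, augment=True):
    """Pack one reaction into RGN source/target sequences.

    source = [optional reaction type] + product SMILES + synthon SMILES (in order)
    target = reactant SMILES arranged in the same order as the synthons
    """
    def pack(syns, rxts):
        src = ' '.join(filter(None, [reaction_type, product] + list(syns)))
        tgt = ' . '.join(rxts)          # '.' separates molecules in SMILES
        return src, tgt

    pairs = [pack(synthons, reactants)]
    if augment and len(synthons) > 1:   # approximate permutation invariance (R1)
        pairs.append(pack(synthons[::-1], reactants[::-1]))
    return pairs
```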
We include a brief description of the Transformer in Appendix C. Determine the generation order of reactants. For the first time, the generation order of reactants can be determined by aligning reactants in the target with synthons in the source, thanks to the intermediate synthons, which are uniquely associated with reactants. In contrast, the generation order of reactants is undetermined in previous methods [4, 6, 15], which naively treat the sequence-to-sequence model as a black box; the uncertainty of the generation order makes their models hard to train. Robustify the RGN. We find that the EGAT struggles to distinguish multiple coexisting reaction centers, which is the major bottleneck of our method. When the reaction center is misidentified, the generated synthons differ from the ground truth. To make our RGN robust enough to predict the desired reactants even if the EGAT fails to recognize the reaction center, we further augment the RGN training data by including unsuccessfully predicted synthons from the training data. We do not reverse the order of synthons for these augmentation samples as in Figure 3. The intuition behind this is that the EGAT tends to make similar mistakes on the training and test datasets, since both follow the same distribution. This enables our RGN to correct reaction center prediction errors and generate the desired set of reactants. 3 Experiments Dataset and preprocessing. We evaluate our method on USPTO-50K [19] and USPTO-full [25] to verify its effectiveness and scalability. USPTO-50K consists of 50K reactions annotated with 10 reaction types (see Appendix A for the type distribution), which is derived from USPTO granted patents [29]. It is widely used in previous retrosynthesis work. We adopt the same 8:1:1 training/validation/test splits as [12, 5]. For the RGN training data, we add an extra 28K samples in which the synthons are reversed, as shown in Figure 3, whenever there are at least two synthons. This yields 68K training samples for the RGN, which we still denote as USPTO-50K in the following content. USPTO-full consists of 950K cleaned reactions from the USPTO 1976-2016 collection [25], which has 1,808,937 raw reactions without reaction types. Reactions with multiple products are duplicated into multiple single-product ones. After removing invalid reactions (empty reactants and missing atom mappings) and deduplication, we obtain 950K reactions, which are randomly partitioned into training/validation/test sets in 8:1:1. For the EGAT, we build molecule graphs using DGL [30] and extract atom and bond features with RDKit. By comparing the molecule graphs of the product and the reactants, we can identify disconnection bonds within the product graph and obtain training labels for both the main and auxiliary tasks; a sketch of this labeling step is given below. This comparison can be done easily for atom-mapped reactions. For reactions without atom mapping, a substructure matching algorithm in RDKit can be utilized to accomplish the comparison. We use RDChiral [31] to extract super general reaction templates and obtain 1,859 semi-templates for the USPTO-50K training data. Semi-templates that appear less than twice are filtered out, leaving 654 semi-templates. As for the RGN, the product molecule graph is divided into synthon graphs according to the ground-truth reaction center, and these are then converted into SMILES strings. The input sequence of the RGN is the concatenation of the possible reaction type, the product SMILES string, and the synthon SMILES strings, as illustrated in Figure 3.
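Below is a minimal sketch of the labeling step referenced above for atom-mapped reactions, using only standard RDKit calls; the function name and data layout are our assumptions, and every product atom is assumed to carry a map number (as in USPTO-50K).

```python
from rdkit import Chem

def disconnection_labels(product_smiles, reactant_smiles_list):
    """Label each product bond 1 if it is broken in the retrosynthesis, else 0."""
    product = Chem.MolFromSmiles(product_smiles)
    reactants = [Chem.MolFromSmiles(s) for s in reactant_smiles_list]

    # Map number -> (reactant index, atom index) for every mapped reactant atom.
    where = {}
    for r_idx, mol in enumerate(reactants):
        for atom in mol.GetAtoms():
            if atom.GetAtomMapNum() > 0:
                where[atom.GetAtomMapNum()] = (r_idx, atom.GetIdx())

    labels = []
    for bond in product.GetBonds():
        r1, a1 = where[bond.GetBeginAtom().GetAtomMapNum()]
        r2, a2 = where[bond.GetEndAtom().GetAtomMapNum()]
        # A bond is disconnected if its endpoints land in different reactants,
        # or in the same reactant but are no longer bonded there.
        if r1 != r2:
            labels.append(1)
        else:
            labels.append(0 if reactants[r1].GetBondBetweenAtoms(a1, a2) else 1)
    return labels
```

The number of 1-labels per reaction directly provides the ground truth $N_k$ for the auxiliary task.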
Implementation. All reactions are represented in canonical SMILES, which are tokenized with the regular expression from [32]. We use DGL [30] and OpenNMT [33] to implement our EGAT and RGN models, respectively. For the EGAT, we stack three identical four-head attentive layers with a hidden dimension of 128; all embedding sizes in the EGAT, such as $F$, $F'$, and $D$, are set to 128. $N_{max}$ is set to two, which covers 99.97% of the training samples. We train the EGAT on USPTO-50K for 80 epochs. EGAT parameters are optimized with Adam [34] with default settings; the initial learning rate is 0.0005 and is scheduled to be multiplied by 0.2 every 20 epochs. We train the RGN for 300,000 time steps, which takes about 30 hours on two GTX 1080 Ti GPUs. We save a checkpoint of the RGN parameters every 10,000 steps and average the last 10 checkpoints as the final model. (Code and the processed USPTO-full data are available at https://github.com/uta-smile/RetroXpert.) We run all experiments three times and report the means of their performance by default. Evaluation metric. Top-N accuracy is used as the evaluation metric for retrosynthesis. A beam search [35] strategy is adopted to keep the top K predictions throughout the reactant generation process, with K set to 50 in all experiments. The generated reactants are represented in canonical SMILES, and a predicted set of reactants is counted as correct only if it is exactly the same as the ground-truth reactants. 3.1 Reaction center identification results To verify the effectiveness of the edge-enhanced attention mechanism, we include an ablation study that removes the edge embedding $p_{i,j}$ when computing the coefficient, i.e., $c_{i,j} = \mathrm{LeakyReLU}(a^T [z_i \,\|\, z_j])$. Results are reported in Table 1. The auxiliary task (Aux) successfully predicts the number of disconnection bonds for 99.2% of test molecules when the reaction type (Type) is given, and for 86.4% when it is not. The main task (Main) alone achieves a prediction accuracy of 74.4% w/ reaction type and 51.5% wo/ reaction type. However, if we adopt the prediction from the auxiliary task as a prior on the number of disconnection bonds and select the most probable disconnection bonds (EGAT), the prediction accuracy is boosted to 86.0% (w/) and 64.9% (wo/), respectively. The edge-enhanced attention (EAtt) consistently improves the model's performance across all tasks. The improvement is more significant when the reaction type is unknown, so our EGAT is more practical in real-world applications without reaction types. This demonstrates that reaction type information plays an important role in retrosynthesis: reactions of the same type usually share similar reaction patterns (involved atoms, bonds, and functional groups), so it is much easier to recognize the reaction center if the reaction type is given as prior knowledge. We also verify the importance of semi-templates in Appendix D.2. 3.2 Reactant prediction results To robustify the RGN as described in the paragraph Robustify the RGN, we also run the P → S prediction on the EGAT training data for USPTO-50K (40K samples); the prediction accuracy is 89.0% in the reaction-type-conditional setting. This yields about 4K unsuccessful synthon predictions as augmentation samples (Aug); added to the original 68K RGN training samples, the total RGN training data size is 72K. For the unconditional setting, the EGAT accuracy is 70.0%, there are 12K augmentation samples, and the total RGN training size is 80K.
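To make the construction of these augmentation samples concrete, here is a schematic sketch. `predict_synthons` stands in for a trained EGAT inference routine and is hypothetical, and the source/target packing mirrors the layout sketched after Section 2.2; the actual pipeline may differ.

```python
def collect_rgn_augmentation(predict_synthons, train_samples):
    """Pair wrongly predicted synthons with the true reactants so the RGN
    learns to recover from reaction center mistakes (no order reversal here)."""
    augmented = []
    for s in train_samples:
        # s: dict with keys 'type', 'product', 'synthons', 'reactants' (SMILES)
        predicted = predict_synthons(s['product'])
        if predicted != s['synthons']:                 # EGAT missed the center
            src = ' '.join(filter(None, [s['type'], s['product']] + list(predicted)))
            tgt = ' . '.join(s['reactants'])
            augmented.append((src, tgt))
    return augmented
```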
We train RGN models on USPTO-50K with and without the augmentation (Aug) and report results in Table 2. RGN evaluation. For the RGN evaluation, the RGN input consists of the ground-truth synthons; therefore the results in Table 2 indicate the upper bound of our method's overall retrosynthesis performance. The proposed augmentation strategy does not always improve this upper bound. Without the given reaction type, the RGN generally performs worse with the augmentation due to the introduced noisy training samples. However, when the reaction type is given, the augmentation boosts prediction accuracy. We presume this is because the reaction type plays a significant role: the RGN learns to put more attention on the reaction type and the product, instead of the synthons, to generate the reactants. Retrosynthesis evaluation. To evaluate the overall retrosynthesis prediction accuracy, the synthons generated by P → S, instead of the ground truth, are input into the RGN. In this way, we only need to compare the predicted reactants with the ground-truth ones, without considering whether the reaction center predictions are correct or not. We report the retrosynthesis results in Table 3. Our method RetroXpert achieves impressive performance on the test data. Specifically, when given reaction types, our proposed method achieves 70.4% Top-1 accuracy, which outperforms the SOTA Top-1 accuracy of 63.2% [5] by a large margin. Note that our Top-1 accuracy of 70.4% is quite close to the upper bound of 73.4% in Table 2, which indicates that the augmentation strategy proposed in Robustify the RGN is considerably effective. As for results wo/ given reaction type, our model improves the SOTA Top-1 accuracy from 52.6% [5] to 65.6%. To verify the effectiveness of the augmentation, we conduct an ablation study and report results in Appendix D.3. While our method is better in Top-1, Top-3, and Top-5 accuracy, the template-based methods GLN [5] and RetroSim [12] are better at Top-20 and Top-50 predictions, since they enumerate multiple different reaction templates for each product to increase the hit rate, whereas our RetroXpert is currently designed to find the single best set of reactants. To increase diversity, we could design new strategies to enumerate multiple reaction centers for each product; this is left as future work. We notice that the gap between Top-2 and Top-1 accuracy is around 10%. After having experienced chemists investigate these predictions from the synthetic chemistry perspective, we find that about 9/10 of these Top-1 predictions are actually reasonable (see Appendix E for details). This indicates that our method can learn general chemical reaction knowledge beyond the given ground truth. 4 Large scale experiments To demonstrate the scalability of our method, we also experiment on the USPTO-full dataset, which contains 760K training reactions. We extract 75,129 semi-templates and keep only the 3,788 that appear at least 10 times. We set $N_{max}$ to 5 to cover 99.87% of the training data. We obtain 1.35M training samples after reversing synthons. The accuracy of P → S on the training set is 60.5%, yielding 0.3M unsuccessful synthon samples, so the total RGN training data size is 1.65M. We train the RGN for 500,000 time steps on USPTO-full while keeping the other settings the same as in Section 3. We run the official implementation of GLN following its instructions [5], as well as a self-implemented SCROP [6], on the USPTO-full dataset. Experimental results are reported at the bottom of Table 3.
Our method again significantly outperforms SCROP and GLN, which demonstrates that our model scales well to this large real-world dataset. Note that both template-free methods, SCROP and RetroXpert, outperform GLN significantly, which may indicate that the scalability of template-based methods is very limited. 5 Prediction visualization For the EGAT, Figure 4 illustrates how the auxiliary task helps to identify the reaction center. Note that in the first example the two colored bonds and their surrounding structures are very similar; current shallow GNNs consider only local information and fail to distinguish the true reaction center. Under the guidance of the auxiliary task, EGAT is able to identify the true reaction center. Figure 5 demonstrates the robustness of our method: even if the predicted synthons are different from the ground truth, the RGN still successfully generates the desired reactants. 6 Discussion One major common limitation of current retrosynthesis work is the lack of reasonable evaluation metrics. There may be multiple valid ways to synthesize a product, while the current evaluation metric considers only the given reaction. More evaluation metrics should be proposed in the future. Broader Impact Our proposed retrosynthesis method RetroXpert solves retrosynthesis prediction in two steps, as chemists do, and achieves impressive performance. It is template-free and scales well to large real-world datasets. We believe that our work will greatly inspire and advance related research, such as forward reaction prediction and drug discovery. Researchers and industry experts in drug discovery will benefit most from this research, since retrosynthesis prediction is an important part of drug discovery. We are not aware of anyone who may be put at a disadvantage by this research. Our method does not take advantage of data bias; it is general and scalable. Acknowledgments and Disclosure of Funding We would like to thank Hanjun Dai for providing the source implementation of GLN. This work was partially supported by US National Science Foundation IIS-1718853, the CAREER grant IIS-1553687, and a Cancer Prevention and Research Institute of Texas (CPRIT) award (RP190107).
1. What is the main contribution of the paper in the field of retrosynthesis?
2. What are the strengths of the proposed approach, particularly in terms of its performance on the USPTO-50k and USPTO-full datasets?
3. What are the weaknesses of the paper regarding potential biases in the dataset and its applicability to multistep retrosynthesis challenges?
4. How does the reviewer assess the novelty and significance of the Edge-enhanced Graph Attention Network and the reactant generation network?
5. What are the suggestions for improving the paper, such as addressing potential biases and extending the approach to multistep retrosynthesis?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper presents a novel, template-free algorithm for retrosynthesis that automates the protocol typically used by chemists to predict possible reactants when given the product. This involves two steps: (i) identify the potential reaction center and split the molecule according to this hypothesis, and (ii) predict the corresponding reactants. For reaction center identification, the paper presents an interesting Edge-enhanced Graph Attention Network that adds a graph-level auxiliary task to predict the total number of disconnection bonds. The authors then use a Transformer to build the reactant generation network. To test their approach, the authors carry out experiments using the well-known USPTO-50k and USPTO-full datasets. Performance is evaluated using Top-N accuracy, and the method proposed in this paper outperforms the existing SOTA Top-1 accuracy by some margin on both USPTO-50k and USPTO-full.

Strengths
This paper presents a novel approach that generates considerable improvement in Top-1 performance on the USPTO-50k dataset.

Weaknesses
No comment is made about possible biases in the USPTO dataset and how these might affect the reported results. It would be good to check whether the method suggested here exploits any of these biases by comparing performance on a different, independently constructed dataset. Could the authors please address how their approach to the single-step problem can be extended to multistep retrosynthesis challenges? Furthermore, a more comprehensive survey of recent literature in this area should be presented, particularly recent template-free approaches.
NIPS
Title RetroXpert: Decompose Retrosynthesis Prediction Like A Chemist Abstract Retrosynthesis is the process of recursively decomposing target molecules into available building blocks. It plays an important role in solving problems in organic synthesis planning. To automate or assist in the retrosynthesis analysis, various retrosynthesis prediction algorithms have been proposed. However, most of them are cumbersome and lack interpretability about their predictions. In this paper, we devise a novel template-free algorithm for automatic retrosynthetic expansion inspired by how chemists approach retrosynthesis prediction. Our method disassembles retrosynthesis into two steps: i) identify the potential reaction center of the target molecule through a novel graph neural network and generate intermediate synthons, and ii) generate the reactants associated with synthons via a robust reactant generation model. While outperforming the state-of-the-art baselines by a significant margin, our model also provides chemically reasonable interpretation. 1 Introduction Retrosynthesis of the desired compound is commonly constructed by recursively decomposing it into a set of available reaction building blocks. This analysis mode was formalized in the pioneering work [1, 2] and now have become one of the fundamental paradigms in the modern chemical society. Retrosynthesis is challenging, in part due to the huge size of the search space. The reported syntheticorganic knowledge consists of in the order of 107 reactions and compounds [3]. On the other hand, the incomplete understanding of the reaction mechanism also increases the difficulty of retrosynthesis, which is typically undertaken by human experts. Therefore, it is a subjective process and requires considerable expertise and experience. However, molecules may have multiple possible retrosynthetic routes and it is challenging even for experts to select the most appropriate route since the feasibility of a route is often determined by multiple factors, such as the availability of potential reactants, reaction conditions, reaction yield, and potential toxic byproducts. ∗Both authors contribute equally to the work. †This work is done when Chaochao Yan, Qianggang Ding, Shuangjia Zheng, and Jinyu Yang work as interns at Tencent AI Lab. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. In this work, we focus on the single-step version (predict possible reactants given the product) of retrosynthesis following previous methods [4, 5, 6]. Our method can be decomposed into two subtasks [1, 7]: i) Breaking down the given target molecule into a set of synthons which are hypothetical units representing potential starting reactants in the retrosynthesis of the target, and ii) Calibrating the obtained synthons into a set of reactants, each of which corresponds to an available molecule. Various computational methods [8, 9, 10, 11, 12, 4, 13, 14, 5, 6, 15, 16] have been developed to assist in designing synthetic routes for novel molecules, and these methods can be broadly divided into two template-based and template-free categories. Template-based methods plan retrosynthesis based on hand-encoded rules or reaction templates. Synthia (formerly Chematica) relies on hand-encoded reaction transformation rules [11], and it has been experimentally validated as an efficient software for retrosynthesis [17]. 
However, it is infeasible to manually encode all the synthesis routes in practice considering the exponential growth in the number of reactions [14]. Reaction templates are often automatically extracted from the reaction databases and appropriate templates are selected to apply to the target [12, 13, 14, 5]. The key process of these approaches is to select relevant templates for the given target. An obvious limitation is that these methods can only infer reactions within the chemical space covered by the template database, preventing them from discovering novel reactions [18]. On the other hand, template-free methods [4, 6, 15] treat the retrosynthesis as a neural machine translation problem, since molecules can be represented as SMILES 3 strings. Although simple and expressive, these models do not fit into the chemists’ analytical process and lack interpretability behind their predictions. Besides, such approaches fail to consider rich chemistry knowledge within the chemical reactions. For example, the generation order of reactants is undetermined in [4, 6, 15] since they ignore the correlation between synthons and reactants, resulting in slower and inferior model convergence. Similar to our method, the concurrent work G2Gs [16] also presents a decomposition and generation two-step framework. G2Gs proposes to incrementally generate reactants from the associated synthons with a variational graph translation model. However, G2Gs can predict at most one bond disconnection which is not universal. Besides, G2Gs independently generates multiple reactants, which ignores the relationship between multiple reactants. To overcome these challenges, inspired by the expert experience from chemists, we devise a two-step framework named as RetroXpert (Retrosynthesis eXpert) to automate the retrosynthesis prediction. Our model tackles it in two steps as shown in Figure 1. Firstly, we propose to identify the potential reaction center within the target molecule using a novel Edge-enhanced Graph Attention Network (EGAT). The reaction center is referred to as the set of bonds that will be disconnected in the retrosynthesis process. Synthons can be obtained by splitting the target molecule according to the reaction center. Secondly, the Reactant Generation Network (RGN) predicts associated reactants given the target molecule and synthons. Different from previous methods [4, 6, 15], the reactant generation order can be uniquely decided in our method, thanks to the intermediate synthons. What is more, we notice that the robustness of the RGN plays an important role. To robustify the RGN, we propose to augment the training data of RGN by incorporating unsuccessful predicted synthons. Our main contributions can be summarized as follows: 1) We propose to identify the potential reaction center with a novel Edge-enhanced Graph Attention Network (EGAT) which is strengthened with chemical knowledge. 2) By splitting the target molecule into synthons, the RGN is able to determine the generation order of reactants. We further propose to augment training data by introducing unsuccessfully predicted synthons, which makes RGN robust and achieves significant improvement. 3) On the standard USPTO-50K dataset [19], our method achieves 70.4% and 65.5% Top-1 accuracy when w/ and wo/ reaction type, respectively, which outperforms SOTA accuracy 63.2% (w/) and 52.6% (wo/) reported in [5] by a large margin. 
2 Methodology Given a molecule graph G withN nodes (atoms), we denote the matrix representation of node features as X ∈ RN×M , the tensor representation of edge features as E ∈ RN×N×L, and the adjacency matrix as A ∈ {0, 1}N×N . M and L are feature dimensions of atoms and bonds, respectively. We 3https://www.daylight.com/dayhtml/doc/theory/theory.smiles.html denote as P, S,R the product, synthons, and reactants in the reaction formulation, respectively. The single-step retrosynthesis problem can be described as given the desired product P, seeking for a set of reactants R = {R1, R2, ..., Rn} that can produce the major product P through a valid chemical reaction. It is denoted as P → R (predict R given P), which is the reverse process of the forward reaction prediction problem [20, 21] that predicts the outcome products given a set of reactants. As illustrated in Figure 1, our method decomposes the retrosynthesis task (P→ R) into two closely dependent steps reaction center identification (P→ S) and reactant generation (S→ R). The first step is to identify the potential reaction bonds which will be disconnected during the retrosynthesis, and then the product P can be split into a set of intermediate synthons S = {S1, S2, ..., Sn}. Note that each synthon Si can be regarded as the substructure of a reactant Ri. The second step is to transform synthons S = {S1, S2, ..., Sn} into associated reactants R = {R1, R2, ..., Rn}. Although the intermediate synthons are not needed in retrosynthesis, decomposing the original retrosynthesis task (P→ R) into two dependent procedures can have multiple benefits, which will be elaborated thoroughly in the following sections. 2.1 EGAT for reaction center identification We treat the reaction center identification as a graph-to-graph transformation problem which is similar to the forward reaction outcome prediction [21]. To achieve this, we propose a graph neural network named Edge-enhanced Graph Attention Network (EGAT) which takes the molecule graph G as input and predicts disconnection probability for each bond, and this is the main task. Since a product may be produced by different reactions, there can be multiple reaction centers for a given product and each reaction center corresponds to a different reaction. While current message passing neural networks [22] are shallow and capture only local structure information for each node, and it is difficult to distinguish multiple reaction centers without global information. To alleviate the problem, we add a graph-level auxiliary task to predict the total number of disconnection bonds. As shown in Figure 2, distinct from the Graph Attention Network (GAT) [23] which is designed to learn node and graph-level embeddings, our proposed EGAT also learns edge embedding. It identifies the reaction center by predicting the disconnection probability for each bond taking its edge embedding as input. Given the target G = {A,E,X}, the EGAT layer computes node embedding h′i and edge embedding p′i,j from previous layer’s embeddings hi and pi,j by following equations: where W ∈ RF ′×F , a ∈ R2F ′+D , U ∈ RF×(F ′+D) , and V ∈ RD×(2F+D) are trainable parameters, || means concatenation operation, Ni is all neighbor nodes of the node i, αi,j is the attention weight between the node i and its neighbor node j, and h′i ∈ RF as well as p′i,j ∈ RD are A tt en tio n Li ne ar node h h' node h GAT h' edge p Li ne ar Li ne ar Li ne ar p' EGAT A tt en tio n the output node and edge representations, respectively. 
Initial input embeddings hi, pi,j are the input node and edge feature vectors xi, ei,j , respectively, which will be detailed later, and in this special case the dimensions F and D equals to the dimensions of associated features, respectively. After stacking multiple EGAT layers, we obtain the final edge representation pi,j for the chemical bond between nodes i and j, as well as the node representation hi for each node i. To predict the disconnection probability for a bond, we perform a fully-connected layer parameterized by wfc ∈ RD and a Sigmoid activation layer to pi,j and its disconnection probability is di,j = Sigmoid(wTfc · pi,j). Note that the multi-head attention mechanism can also be applied like the original GAT. The optimization goal for bond disconnection prediction is to minimize the negative log-likelihood between prediction di,j and ground-truth yi,j ∈ {0, 1} through the binary cross entropy loss function: LM = − 1 K K∑ k=1 ∑ ai,j∈Ak ai,j [(1− yi,j)log(1− di,j) + yi,j log(di,j)], (2) where K is the total number of training reactions and bond (i, j) exists if the associated adjacency element ai,j is nonzero. The ground truth yi,j = 1 means the bond (i, j) is disconnected otherwise remaining the same during the reaction. Bond disconnection labels can be obtained by comparing molecule graphs of target and reactants. The input of the auxiliary task is the graph-level representation hG = READOUT({hi|1 ≤ i ≤ N}), which is the output of the READOUT operation over all learned node representations. We adopts an arithmetic mean as the READOUT function hG = 1N ∑N i=1 hi and it works well in practice. Similarly, a fully-connected layer parameterized by Ws ∈ R(1+Nmax)×F and a Softmax activation function are applied to hG to predict the total number of disconnected bonds, which is solved as a classification problem here. Each category represents the exact number of disconnected bonds, so there are 1+Nmax classification categories. Nmax is the maximum number of possible disconnected bonds in the retrosynthesis. We denote the Softmax output as q = Softmax(Ws · hG). The total number of disconnected bonds for each target molecule is predicted as: n∗ = argmax n (qn) = argmax n (Softmax(Ws · hG)n), 0 ≤ n ≤ Nmax. (3) The ground truth number of disconnections for molecule k is denoted as Nk, the indicator function 1(i,Nk) is 1 if i equals to Nk otherwise it is 0, and the cross entropy loss for the auxiliary task: LA = 1 K K∑ k=1 CrossEntropy(Nk, q k) = − 1 K K∑ k=1 Nmax∑ i=0 1(i,Nk)log(q k i ). (4) Finally, the overall loss function for the EGAT is LEGAT = LM + αLA, where α is fixed to 1 in our study since we empirically find that α is not a sensitive hype-parameter. Atom and bond features. The atom feature consists of a series of general atom information such as atom type, hybridization, and formal charge, while the bond feature is composed of chemical bond information like bond type and conjugation (see Appendix B for details). These features are similar to those used in [24] which is for chemical property prediction. We compute these features using the open-source toolkit RDKit 4. To fully utilize the provided rich atom-mapping information of the USPTO datasets [19] [25], we add a semi-templates indicator to atom feature. For retrosynthesis dataset with given reaction type, a type indicator is also added to the atom feature. Semi-templates. For atom-mapped USPTO datasets, reaction templates are extracted from reaction data like previous template-based methods [12, 14, 5]. 
However, we are not interested in full reaction templates since these templates are often too specific. There are as many as 11,647 templates for the USPTO-50K train data [5]. Only the product side of templates are kept instead, which we name as semi-templates. Since reaction templates are closely related to the exact reaction, the semi-templates indicator expected to play a significant role in reaction center identification. The semi-templates can be considered as subgraph patterns within molecules. We build a database of semi-templates from training data and find all appeared semi-templates within each molecule. For each atom, we mark the indicator bits associated with appeared semi-templates. Note that each atom within a molecule may belong to several semi-templates since these semi-templates are not mutually exclusive. Although reaction templates are introduced, our method is still template-free since i) only semi-templates are incorporated and our method does not rely on full templates to plan the retrosynthesis, and ii) our EGAT still works well in the absence of semi-templates, with only slight performance degradation (Appendix D.2). 2.2 Reactant generation network Once the reaction center has been identified, synthons can be obtained by applying bond disconnection to decompose the target graph. Since each synthon is basically a substructure within the reactant, we are informed of the total number of reactants and substructures of these reactants. The remaining task S→ R is much simpler than the original P→ R in which even the number of reactants is unknown. Specifically, task S→ R is to generate the set of desired reactants given obtained synthons. Based on commonsense knowledge of chemical reaction, we propose that the ideal RGN should meet following three requirements: R1) be permutation invariant and generate the same set of reactants no matter the order of synthons, R2) all given information should be considered when generating any reactant, and R3) the generation of each reactant also depends on those previously generated reactants. To fulfill these requirements, we represent molecules in SMILES and formulate S→ R as a sequenceto-sequence prediction problem. We convert synthon graphs to SMILES representations using RDKit, though these synthons may be chemically invalid. As in Figure 3, source sequence is the concatenation of possible reaction types, canonical SMILES of the product, and associated synthons. The target sequence is the desired reactants arranged according to synthons. We approximate the requirement R1 by augmenting train samples with reversely arranged synthons and reactants as shown in Figure 3. Our empirical studies demonstrate that such approximation works pretty well in practice. To satisfy the requirement R2, the encoder-decoder attention mechanism [26] [27] is employed, which allows each position in the target sequence attends to all positions in the source sequence. A similar masked self-attention mechanism [27], which masks future positions in the decoder, is adopted to make the RGN meet the requirement R3. 4https://www.rdkit.org Motivated by the success of Transformer [27] in natural machine translation, we build the RGN based on the Transformer module. Transformer is a sequence-to-sequence model equipped with two types of attention mechanisms: self-attention and encoder-decoder attention [27]. Transformer is also adapted for reaction outcome prediction [28] and retrosynthesis [6], in which both products and reactants are represented in SMILES. 
We include a brief description of Transformer in Appendix C. Determine the generation order of reactants. For the first time, the generation order of reactants can be determined by aligning reactants in the target with synthons in the source, thanks to intermediate synthons which are associated with reactants uniquely. While the generation order of reactants is undetermined in previous methods [4, 6, 15], which naively treats the sequence-to-sequence model as a black box. The uncertainty of the generation order makes their models hard to train. Robustify the RGN. We find the EGAT suffers from distinguishing multiple coexisting reaction centers, which is the major bottleneck of our method. As a result of the failure of identifying the reaction center, the generated synthons are different from the ground truth. To make our RGN robust enough and able to predict the desired reactants even if the EGAT fails to recognize the reaction center, we further augment RGN training data by including those unsuccessfully predicted synthons on training data. We do not reverse the order of synthons for these augmentation samples like in Figure 3. The intuition behind is that EGAT tends to make similar mistakes on training and test datasets since both datasets follow the same distribution. This method can make our RGN able to correct reaction center prediction error and generate the desired set of reactants. 3 Experiments Dataset and preprocessing. We evaluate our method on USPTO-50K [19] and USPTO-full [25] to verify its effectiveness and scalability. USPTO-50K consists of 50K reactions annotated with 10 reaction types (see appendix A for type distribution), which is derived from USPTO granted patents [29]. It is widely used in previous retrosynthesis work. We adopt the same training/validation/test splits in 8:1:1 as [12, 5]. For RGN training data, we add an extra 28K samples of which synthons are reversed as shown in Figure 3 if there are at least two synthons. There are 68K training samples for RGN, which is still denoted as USPTO-50K in the following content. The USPTO-full consists of 950K cleaned reactions from the USPTO 1976-2016 [25], which has 1,808,937 raw reactions without reaction types. Reactions with multiple products are duplicated into multiple single-product ones. After removing invalid reactions (empty reactant and missing atom mappings) and deduplication, we can obtain 950K reactions 5, which are randomly partitioned into training/validation/test sets in 8:1:1. For the EGAT, we build molecule graphs using DGL [30] and extract atom and bond features with RDkit. By comparing molecule graphs of product and reactants, we can identify disconnection bonds within the product graph and obtain training labels for both main and auxiliary tasks. This comparison can be easily done for atom-mapped reactions. For reactions without atom-mapping, a substructure matching algorithm in RDKit can be utilized to accomplish the comparison. We use RDChiral [31] to extract super general reaction templates, and obtain 1859 semi-templates for USPTO-50K training data. Semi-templates that appear less than twice are filtered and finally 654 semi-templates are obtained. As for the RGN, the product molecule graph is divided into synthon graphs according to the ground truth reaction center, then are converted into SMILES strings. The input sequence of RGN is the concatenation of the possible reaction type, product SMILES string, and synthon SMILES strings as illustrated in Figure 3. Implementation. 
All reactions are represented in canonical SMILES, which are tokenized with the regular expression in [32]. We use DGL [30] and OpenNMT [33] to implement our EGAT and RGN models, respectively. As for the EGAT, we stack three identical four-head attentive layers of which the hidden dimension is 128. All embedding sizes in EGAT are set to 128, such as F , F ′, and D. The Nmax is set to be two to cover 99.97% training samples. We train the EGAT on USPTO-50K for 80 epochs. EGAT parameters are optimized with Adam [34] with default settings, and the initial learning rate is 0.0005 and it is scheduled to multiply 0.2 every 20 epochs. We train the RGN for 300, 000 time steps, and it takes about 30 hours on two GTX 1080 Ti GPUs. We save a checkpoint of 5Code and processed USPTO-full data are available at https://github.com/uta-smile/RetroXpert RGN parameters every 10, 000 steps and average the last 10 checkpoints as the final model. We run all experiments for three times and report the means of their performance in default. Evaluation metric. The Top-N accuracy is used as the evaluation metric for retrosynthesis. Beam search [35] strategy is adopted to keep top K predictions throughout the reactant generation process. K is set to 50 in all experiments. The generated reactants are represented in canonical SMILES. A correct predicted set of reactants must be exactly the same as the ground truth reactants. 3.1 Reaction center identification results To verify the effectiveness of edge-enhanced attention mechanism, we also include the ablation study by removing edge embedding pi,j when computing the coefficient ci,j = LeakyReLU(aT [zi||zj ]). Results are reported in Table 1. The auxiliary task (Aux) can successfully predict the number of disconnection bonds for 99.2% test molecules given the reaction type (Type) while 86.4% if not given. As for the main task (Main) alone, its prediction accuracy is 74.4% w/ reaction type and 51.5% wo/ reaction type. However, if we adopt the prediction from the auxiliary task as the prior of the number of disconnection bonds, and select the most probable disconnection bonds (EGAT), then the prediction accuracy can be boosted to 86.0% (w/) and 64.9% (wo/), respectively. The edge-enhanced attention (EAtt) can consistently improve the model’s performance in all tasks. The improvement is more significant when the reaction type is unknown, so our EGAT is more practical in real world applications without reaction types. This demonstrates that the reaction type information plays an important role in the retrosynthesis. The reactions of the same type usually share similar reaction patterns (involved atoms, bonds, and functional groups), it is much easier to recognize the reaction center if the reaction type is given as the prior knowledge. We also verify the importance of semi-templates in Appendix D.2. 3.2 Reactant prediction results To robustify the RGN as described in the paragraph Robustify the RGN, we also conduct the P→ S prediction on the EGAT training data for USPTO-50K (40K), and the prediction accuracy is 89.0% for the reaction type conditional setting. We can obtain about 4K unsuccessful synthon predictions as augmentation samples (Aug), adding the original 68K RGN training data, the total RGN training data size is 72K. For the unconditional setting, the EGAT accuracy is 70.0% and there are 12K augmentation samples, and the total RGN training size is 80K in this case. 
We train RGN models on the USPTO-50K with/without the augmentation (Aug), and report results in Table 2. RGN evaluation For the RGN evaluation, the RGN input consists of the ground truth synthons. Therefore the results in Table 2 indicate the upper bound of our method’s overall retrosynthesis performance. The proposed augmentation strategy does not always improve the upper bound. Without given reaction type, the RGN generally performs worse with the augmentation due to the introduced dirty training samples. However, when given reaction type, this augmentation boosts its prediction accuracy. We presume that it is because the reaction type plays a significant role. The RGN learns to put more attention on the reaction type and product instead of synthons to generate the reactants. Retrosynthesis evaluation To evaluate the overall retrosynthesis prediction accuracy, the generated synthons from P→ S instead of the ground truth are input into the RGN. In this way, we only need to compare the predicted reactants with the ground truth ones, without considering if the reaction center predictions correct or not. We report the retrosynthesis results in Tables 3. Our method RetroXpert achieves impressive performance on the test data. Specifically, when given reaction types, our proposed method achieves 70.4% Top-1 accuracy, which outperforms the SOTA Top-1 accuracy 63.2% [5] by a large margin. Note that our Top-1 accuracy 70.4% is quite close to the upper bound 73.4% in Table 2, which indicates the proposed augmentation strategy in Robustify the RGN is considerably effective. As for results wo/ given reaction type, our model improves the SOTA Top-1 accuracy from 52.6% [5] to 65.6%. To verify the effectiveness of augmentation, we conduct ablation study and report results in Appendix D.3. While our method outperforms in Top-1, Top-3, and Top-5 accuracy, template-based methods GLN [5] and RetroSim [12] are better at Top-20 and Top-50 predictions since they enumerate multiple different reaction templates for each product to increase the hit rate. While our RetroXpert is currently designed to find the best set of reactants. To increase the diversity, we can design new strategies to enumerate multiple reaction centers for each product. This is left as the feature work. We notice that the gap between Top-2 and Top-1 accuracy is around 10%. After investigating these 10% predictions by experienced chemists from the synthetic chemistry perspective, we find about 9/10 these Top-1 predictions are actually reasonable (see Appendix E for details). This indicates that our method can learn general chemical reaction knowledge, which is beyond the given ground truth. 4 Large scale experiments To demonstrate the scalability of our method, we also experiment on the USPTO-full dataset, which consists of 760K training data. We extract 75,129 semi-templates and keep only 3,788 ones that appear at least 10 times. We set Nmax as 5 to cover 99.87% training data. We obtain 1.35M training data after reversing synthons. The final accuracy of the P → S on training set is 60.5%, and there are 0.3M unsuccessful synthon data and the total RGN training data size is 1.65M. We train the RGN for 500,000 time steps on USPTO-full while keeping the other settings the same as those in section 3. We run the official implementation of GLN following their instructions [5], as well as the self-implemented SCROP [6] on the USPTO-full dataset. Experimental results are reported at the bottom of Table 3. 
Our method again significantly outperforms SCROP and GLN, which demonstrates that our model scales well to this large real-world dataset. Note that both template-free methods, SCROP and RetroXpert, outperform GLN significantly, which may indicate that the scalability of template-based methods is very limited.

5 Prediction visualization

Figure 4 illustrates how the auxiliary task helps EGAT identify the reaction center. Note that in the first example the two colored bonds and their surrounding structures are very similar. Current shallow GNNs consider only local information and fail to distinguish the true reaction center; under the guidance of the auxiliary task, EGAT is able to identify it. Figure 5 demonstrates the robustness of our method: even when the predicted synthons differ from the ground truth, the RGN still successfully generates the desired reactants.

6 Discussion

One major limitation of current retrosynthesis work is the lack of reasonable evaluation metrics. There may be multiple valid ways to synthesize a product, while the current evaluation metric considers only the given reaction. More evaluation metrics should be proposed in the future.

Broader Impact

Our proposed retrosynthesis method RetroXpert solves retrosynthesis prediction in two steps, as chemists do, and achieves impressive performance. It is template-free and scales well to large real-world datasets. We believe that our work will greatly inspire and advance related research, such as forward reaction prediction and drug discovery. Researchers and industry experts in drug discovery will benefit most from this research, since retrosynthesis prediction is an important part of drug discovery. We are not aware of anyone who may be put at a disadvantage by this research. Our method does not exploit data bias; it is general and scalable.

Acknowledgments and Disclosure of Funding

We would like to thank Hanjun Dai for providing the source implementation of GLN. This work was partially supported by US National Science Foundation IIS-1718853, the CAREER grant IIS-1553687, and a Cancer Prevention and Research Institute of Texas (CPRIT) award (RP190107).
1. What is the main contribution of the paper in the field of retrosynthetic disconnections?
2. What are the strengths of the proposed approach, particularly in its description, evaluation, and motivation?
3. How does the reviewer assess the novelty and relevance of the paper's content to the NIPS community?
4. Are there any concerns or suggestions regarding the paper's title and abstract?
Summary and Contributions Strengths Weaknesses
Summary and Contributions: The paper proposes a sensible two-stage model to predict retrosynthetic disconnections. First, it predicts which bonds need to be broken to obtain what are called synthons; then it completes the synthons by adding the required functional groups. Performance is measured on the standard dataset, and very good performance is achieved.

Strengths:
+ good description
+ thorough evaluation, with extra experiments in the appendix!
+ good results
+ well-motivated model
+ novelty in the way the problem is approached
+ relevance to the NeurIPS community, because tasks that operate on graphs and change their structure can be found in many domains, not just chemistry

Weaknesses:
- the title. I would suggest being more modest. For example, writing "automating the procedure that chemists used to do" in the abstract sounds like automated retrosynthesis is now a solved problem and chemists are not needed anymore. This is very far away in the future.
NIPS
Title: Benign Underfitting of Stochastic Gradient Descent

Abstract: We study to what extent stochastic gradient descent (SGD) may be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to training data. We consider the fundamental stochastic convex optimization framework, where (one-pass, without-replacement) SGD is classically known to minimize the population risk at rate $O(1/\sqrt{n})$, and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of $\Omega(1)$. Consequently, it turns out that SGD is not algorithmically stable in any sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related with-replacement SGD, for which we show that an analogous phenomenon does not occur, and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results.

1 Introduction

Conventional wisdom in statistical learning revolves around what is traditionally known as the bias-variance dilemma; the classical theory stipulates that the quality of fit to the training data be in a trade-off with model complexity, aiming for a sweet spot where the training error is small yet representative of performance on independent test data. This perspective is reflected in the vast majority of generalization bound techniques offered by contemporary learning theory. Uniform convergence approaches [36, 4] seek capacity control over the model function class, and employ uniform laws of large numbers to argue convergence of sample averages to their respective expectations. Algorithmic stability [9, 32], on the other hand, builds on controlling the sensitivity of the learning algorithm to small changes in its input, and provides algorithm-dependent bounds. Nevertheless, despite the conceptual and technical differences between these two methods, both ultimately produce risk bounds by controlling the training error and the generalization gap. The same is true for many other techniques, including sample compression [17, 2], PAC-Bayes [18, 12], and information-theoretic generalization bounds [29, 37, 24], to name a few. In recent years it has become clear there are other, substantially different, ways to manage the fit vs.
complexity trade-off, that are in a sense incompatible with traditional generalization bound techniques. Evidently, heavily over-parameterized deep neural networks may be trained to perfectly fit training data and generalize well nonetheless [38, 25, 26], thus seemingly disobeying conventional statistical wisdom. This phenomenon has garnered significant attention, with a flurry of research works dedicated to developing new techniques that would be able to explain the strong generalization performance of algorithms in this so-called interpolation regime (see [6, 8] and references therein). Notably, while these algorithms do not strike a balance between model complexity and fit to the data in the traditional sense, fundamentally, they still minimize the empirical risk as a proxy for test performance. To summarize, in the classical and modern regimes alike, learning methods are thought of as minimizing some combination of the training error and the generalization gap, with reasoning that relies in one way or another on the following trivial, yet arguably most profound, bound:

$$\text{test-error} \le \text{train-error} + |\text{generalization gap}|. \qquad (1)$$

In this work, we focus on stochastic gradient descent (SGD), the canonical algorithm for training machine learning models nowadays, and ask whether its generalization performance can be understood through a similar lens. We consider the fundamental stochastic convex optimization (SCO) framework, in which it is well known that SGD minimizes the population risk at a rate of $O(1/\sqrt{n})$ [23]. Remarkably, the classical analysis targets the population risk directly and, in contrast with other generalization arguments, at least seemingly does not rely on the above bound. This highlights an intriguing question: Are these quantities, so fundamental to learning theory, relevant to the way that SGD "works"? Put differently, is it possible to provide a more "conventional" analysis of SGD that conforms with (1)?

Our main result shows that, perhaps surprisingly, there exist convex learning problems where the above bound becomes vacuous for SGD: namely, SGD minimizes the population risk, but at the same time, it does not minimize the empirical risk and thus exhibits a constant generalization gap. This accords neither with the traditional viewpoint nor with that of interpolation, as both recognize the empirical risk as the principal minimization objective. We refer to this phenomenon as benign underfitting: evidently, SGD underfits the training data, but its classical analysis affirms this underfitting to be benign, in the sense that test performance is never compromised as a result. Our construction presents a learning problem where the output of SGD with step size $\eta$ over $n$ i.i.d. training examples is $\Omega(\eta\sqrt{n})$ sub-optimal w.r.t. the best fit possible, and consequently has a generalization gap of the same order. Notably, with the standard step size choice of $1/\sqrt{n}$, necessary to ensure the population risk converges at the optimal rate, this lower bound amounts to a constant. Many previously plausible explanations for the generalization properties of this algorithm are thereby rendered inadequate, at least in the elementary convex setup we consider here. First, it is clear that SGD cannot be framed as any reasonable regularized empirical risk minimization procedure, for the simple reason that it does not minimize the empirical risk, which challenges the implicit regularization viewpoint on the generalization of SGD.
Second, any attempt to explain the generalization of SGD by uniform convergence over any (possibly data-dependent) hypothesis set cannot hold, simply because the sample average associated with the very training set SGD was trained on is not necessarily close to its respective expectation. Finally, as it turns out, SGD provides a strikingly natural example of an algorithm that generalizes well but is not stable in any sense, as the most general notion of algorithmic stability is entirely equivalent to the generalization gap [32].

We then move on to study the generalization gap and empirical risk guarantees of SGD in a broader context. We study the case of non-convex and strongly convex component functions, and present natural extensions of our basic result. In addition, we analyze the variant of SGD where datapoints are sampled with replacement from the training set, in which case the train error is of course low but, perhaps surprisingly, the population risk is well behaved. Finally, we make the natural connection to the study of without-replacement SGD for empirical risk minimization, and derive upper and lower bounds for the multi-epoch regime. These last two points are discussed in further detail in the following.

With vs. without-replacement SGD. We may view one-pass SGD as processing the data via without-replacement sampling from the training set, as randomly reshuffling the examples does not change their unconditional distribution. Thus, it is interesting to consider the generalization gap of the closely related algorithm given by running SGD over examples sampled with replacement from the training set. Considering the instability of SGD for non-smooth losses (see the supplementary for a detailed discussion) and the fact that this variant targets the empirical objective, a priori it would seem this algorithm would overfit the training set and not provide strong population risk guarantees. Surprisingly, our analysis presented in Section 4 reveals this is not the case, and that with a certain iterate averaging scheme the population risk converges at the optimal rate. Consequently, it turns out the generalization gap is well bounded, and therefore this variant constitutes a natural learning rule that is not stable in any sense but the most general one.

Without-replacement SGD for empirical risk minimization. The example featured in our main construction implies a lower bound of $\Omega(n^{-1/4})$ on the convergence rate of a single epoch of without-replacement SGD for finite-sum optimization problems. In this setting, we have a set of $n$ convex losses and we wish to minimize their sum by running SGD over random shufflings of the losses. While the smooth case has been studied extensively (e.g., [28, 27, 20, 31]), the non-smooth case has hardly received much attention. In Section 5 we extend our basic construction to a lower bound for the multi-epoch regime, and complement it with nearly matching upper bounds.

Our techniques. Fundamentally, we exploit the fact that dimension-independent uniform convergence does not hold in SCO [32]. This is a prerequisite to any attempt at separating the train and test losses of any hypothesis vector, let alone that produced by SGD. Another essential condition is the instability of SGD for non-smooth losses, as any form of stability would immediately imply a generalization gap upper bound regardless of uniform convergence.
Our main lower bound draws inspiration from constructions presented in the works of [7] and [1], both of which rely on instability, the latter also exploiting the failure of uniform convergence. However, neither of these contains the main ideas necessary to provoke the optimization dynamics required in our example. A crucial ingredient in our construction consists of encoding into the SGD iterate information about previous training examples. This, combined with a careful design of the loss function, gradient oracle, and population distribution, allows correlating sub-gradients of independent training examples, and in turn guiding the SGD iterates to ascend the empirical risk.

1.1 Summary of main contributions

To summarize, the main contributions of the paper are as follows:

• One-pass SGD in SCO. In Section 3, we study the basic SCO setup where the component losses are assumed to be individually convex, and present a construction where the expected empirical risk and therefore the generalization gap are both $\Omega(\eta\sqrt{n})$. We also provide extensions of our main construction demonstrating:
  – SCO with non-convex component functions may exhibit cases of benign overfitting, where $\mathbb{E}[F(\bar{w}_S) - \hat{F}(\bar{w}_S)] = \Omega(\eta^2 n)$.
  – In SCO with $\lambda$-strongly convex losses, the worst-case generalization gap is $\Omega(1/\lambda\sqrt{n})$ for the standard step size choice.
• With vs. without-replacement SGD in SCO. In Section 4, we prove that the variant of SGD where the training examples are processed via sampling with replacement from the training set minimizes the population risk at the optimal rate, and thus enjoys a generalization gap upper bound of $O(1/\sqrt{n})$.
• Multi-epoch without-replacement SGD. In Section 5, we study convergence rates of without-replacement SGD for finite-sum convex optimization problems. We prove a lower bound of $\Omega(n^{-1/4}K^{-3/4})$ on the optimization error after $K$ epochs over $n$ convex losses, and complement it with upper bounds of $O(n^{-1/4}K^{-1/2})$ and $O(n^{-1/4}K^{-1/4})$ for the multi-shuffle and single-shuffle SGD variants, respectively.

1.2 Additional related work

Gradient descent, algorithmic stability and generalization. Closely related to our work is the study of the stability properties of SGD. For smooth losses, [14] provide upper bounds on the generalization gap by appealing to uniform stability, yielding an $O(1/\sqrt{n})$ rate for a single epoch of $n$ convex losses and the standard step size choice. In a later work, [7] prove tight rates for the uniform stability of SGD in the setting of non-smooth losses, establishing that these scale substantially worse: $\Theta(\eta\sqrt{n})$ for step size $\eta$ and $n$ training examples. Our work shows that in fact the worst-case rate of the generalization gap completely coincides with the uniform stability rate of SGD. A number of works prior to ours studied the extent to which SGD can be explained by implicit regularization in SCO. [16] study the setup where losses are smooth but only required to be convex in expectation, and show SGD may successfully learn when regularized ERM does not. Prior to their work, [11] also rule out a wide range of implicit-regularization-based explanations of SGD in the basic SCO setup with convex losses. On a more general level, our work is related to the study of stability and generalization in modern learning theory, pioneered by [9, 32]. In particular, the failure of (dimension-independent) uniform convergence in SCO was established in [32]. The work of [13] improves the dimension dependence in the construction of [32] from exponential to linear in the number of training examples.
Notably, the construction featured in our main result requires the dimension to be exponential in the sample size; however, the techniques of [13] do not readily extend to our setting. Thus, the optimal dimension dependence for a generalization gap lower bound is left for future work.

Without-replacement SGD for empirical risk minimization. A relatively long line of work studies the convergence properties of without-replacement SGD from a pure optimization perspective (e.g., [28, 20, 30, 27, 19, 31]). Nearly all the papers in this line of work adopt the smoothness assumption, with near-optimal bounds established by [20]. An exception is the paper of [33], where an $O(1/\sqrt{nK})$ upper bound is obtained for $n$ datapoints and $K$ epochs, albeit only for generalized linear models over a bounded domain, notably a setting where uniform convergence holds. Prior to this thread of research, [22] prove a convergence rate of $O(n/\sqrt{K})$ for non-smooth loss functions that applies to any ordering of the losses. To the best of our knowledge, this is also the state-of-the-art result for without-replacement SGD in the non-smooth setting without further assumptions on the loss functions.

Benign overfitting vs. benign underfitting. While both benign underfitting and benign overfitting challenge traditional generalization techniques, which postulate the training error to represent the test error, as we discuss above these two phenomena point to very different regimes of learning. In particular, [34] shows that benign overfitting requires distributional assumptions for the interpolating algorithm to succeed. In contrast, we show that benign underfitting happens for SGD in a setting where it provably learns (namely, SCO), without any distributional assumptions. We also point out that Corollary 1 shows benign overfitting cannot happen in the setup we consider; hence the two phenomena seem to arise in different setups.

Explaining generalization of interpolators. As already discussed, there is a large recent body of work dedicated to understanding why over-parameterized models trained by SGD to zero training error generalize well [6, 8, and references therein]. In particular, the work of [5] aims at explaining the phenomenon for high-dimensional linear models. Some recent papers investigate the limitations of certain techniques in explaining the generalization of interpolating algorithms: [21] show uniform convergence fails to explain generalization of SGD in a setup where the generalization gap is in fact well bounded, in sharp contrast to our work; [3] rule out the possibility of a large class of excess risk bounds explaining the generalization of minimum-norm interpolants. Unlike our work, they study properties of possible risk bounds when benign overfitting occurs, and thus do not pertain to SGD, which never benignly overfits in SCO.

2 Preliminaries

We consider stochastic convex optimization (SCO) specified by a population distribution $\mathcal{Z}$ over a datapoint set $Z$, and a loss function $f : W \times Z \to \mathbb{R}$, where $W \subset \mathbb{R}^d$ is convex and compact. We denote

$$F(w) \coloneqq \mathbb{E}_{z\sim\mathcal{Z}} f(w; z) \quad \text{(population loss)}, \qquad \hat{F}(w) \coloneqq \frac{1}{n}\sum_{i=1}^{n} f(w; z_i) \quad \text{(empirical loss)},$$

where $\{z_1, \ldots, z_n\} \subseteq Z$ stands for the training set, which we regularly denote by $S$. We let $w^\star \coloneqq \arg\min_{w\in W} F(w)$ denote the population minimizer, and $w^\star_S \coloneqq \arg\min_{w\in W} \hat{F}(w)$ denote the empirical risk minimizer (ERM). The diameter of $W$ is defined by $\max_{x,y\in W}\{\lVert x - y\rVert\}$, where $\lVert\cdot\rVert$ denotes the Euclidean norm, and $B_0^d(1) \coloneqq \{x \in \mathbb{R}^d \mid \lVert x\rVert \le 1\}$ denotes the $L_2$ unit ball in $\mathbb{R}^d$.
Given a training set $S = \{z_1, \ldots, z_n\} \sim \mathcal{Z}^n$ and a learning algorithm that outputs a hypothesis $w_S$, we define the generalization gap to be the absolute value of the expected difference between the test and train losses:

$$\big|\, \mathbb{E}_{S\sim\mathcal{Z}^n}\big[F(w_S) - \hat{F}(w_S)\big] \,\big| \quad \text{(generalization gap)}.$$

Throughout most of the paper, we consider one-pass projected SGD over $S$: initialize at $w_1 \in W$; for $t = 1, 2, \ldots, n$:

$$w_{t+1} \leftarrow \Pi_W(w_t - \eta g_t), \quad \text{with } g_t \in \partial f(w_t; z_t),$$

where $\partial f(w; z)$ denotes the set of sub-gradients of $f(\cdot\,; z) : W \to \mathbb{R}$ at the point $w \in W$, and $\Pi_W : \mathbb{R}^d \to W$ the projection operation onto $W$.

3 A generalization gap lower bound for SGD

In this section, we establish our main result: that there exist convex learning problems where SGD incurs a large optimization error and therefore also a large generalization gap. When losses are convex, these two quantities are closely related, since in expectation the empirical risk minimizer cannot significantly outperform the population minimizer (a claim that will be made rigorous shortly after our main theorem). Our construction builds on losses that are highly non-smooth, leading to SGD taking gradient steps that actually ascend the empirical objective.

Theorem 1. Let $n \in \mathbb{N}$, $n \ge 4$, $d \ge 2^{4n}\log n$, and $W = B_0^{2d}(1)$. Then there exists a distribution over an instance set $Z$ and a 4-Lipschitz convex loss function $f : W \times Z \to \mathbb{R}$ such that running SGD initialized at $w_1 = 0$, with step size $\eta > 0$ over $S \sim \mathcal{Z}^n$, yields:

(i) a large optimization error: $\mathbb{E}\big[\hat{F}(\bar{w}_S) - \hat{F}(w^\star_S)\big] = \Omega\big(\min\{\eta\sqrt{n},\, \tfrac{1}{\eta\sqrt{n}}\}\big)$;

(ii) a large generalization gap: $\mathbb{E}\big[\hat{F}(\bar{w}_S) - F(\bar{w}_S)\big] = \Omega\big(\min\{\eta\sqrt{n},\, \tfrac{1}{\eta\sqrt{n}}\}\big)$;

where $\bar{w}_S$ is any suffix average of the iterates. In particular, for $\eta = \Theta(1/\sqrt{n})$, the population risk is $\mathbb{E}[F(\bar{w}_S) - F(w^\star)] = O(1/\sqrt{n})$, while the generalization gap and training error are both $\Omega(1)$.

A detailed proof of Theorem 1 is deferred to the supplementary; in the following we provide an informal overview containing its principal ingredients.

Proof sketch. Let $Z \coloneqq \{0,1\}^d$, and consider a population distribution $\mathcal{Z}$ such that $z(i) = 1$ with probability $\delta$. We will use a loss function of the form

$$f(w; z) \coloneqq \lVert z \odot w \rVert + \varphi(w; z),$$

where $\odot$ denotes the element-wise product. The high-level idea is that the norm component penalizes $w$'s that correlate with the given sample point $z$, and the $\varphi$ function (the details of which are left for the supplementary) is tailored so that it drives the SGD iterates precisely to those areas in the $L_2$ ball where the iterate correlates with the training set $\{z_1, \ldots, z_n\}$. In addition, the choice of parameters is such that the population loss is approximately zero over the entire domain. Taking $d$ sufficiently large compared to $\delta^{-1}$, we ensure that w.h.p., for every round $t \in [n]$ there exist many coordinates $i \in [d]$ with a prefix of ones: $z_1(i) = \cdots = z_{t-1}(i) = 1$. With $\delta$ chosen sufficiently small compared to $n$, we ensure that as long as $i \in [d]$ is any coordinate chosen independently of $\{z_{t+1}, \ldots, z_n\}$, w.h.p. this coordinate will have a suffix of zeros: $z_{t+1}(i) = \cdots = z_n(i) = 0$. Our goal is to make SGD take steps $w_{t+1} \approx w_t - \eta e_{i_t}$ (where $e_i$ denotes the $i$'th standard basis vector), where $i_t \in [d]$ is a coordinate with the aforementioned property of having a prefix of ones followed by a suffix of zeros. Note that since these steps are taken after the prefix of ones has ended, they will inflict a large empirical loss from the norm component, but will not be "corrected" by future steps, owing to the suffix of zeros. To achieve this, we design $\varphi$ so that it encodes the relevant information into the SGD iterates.
Specifically, $\varphi$ "flags" (using some extra dimensions) all coordinates $i \in [d]$ where a prefix of ones has been encountered. In addition, using another max component in $\varphi$, we have that for all such coordinates $i$, $e_i \in \partial f(w_t; z)$ for any example $z$ (as this component in the loss depends only on the iterate $w_t$). In particular, we get that $e_i \in \partial f(w_t; z_t)$. Then, our gradient oracle just returns a subgradient pointing towards one of these coordinates (for convenience, we use the minimal one), which we denote by $i_t$, and SGD makes the desired step. Notably, the coordinate $i_t$ chosen by the subgradient oracle is independent of future examples, and therefore will have a suffix of zeros w.h.p. Hence, as mentioned, this ensures no gradient signal after round $t$ will be able to correct the empirical risk ascent on $i_t$. Concluding, for the final iterate $\bar{w} \coloneqq w_{n+1}$ we get $\bar{w}(i_t) = -\eta$ for all $t \in [n]$, therefore

$$\hat{F}(\bar{w}) = \frac{1}{n}\sum_{i=1}^{n} f(\bar{w}; z_i) \approx \frac{1}{n}\sum_{i=1}^{n} \lVert z_i \odot \bar{w}\rVert \approx \lVert\bar{w}\rVert \approx \sqrt{\eta^2 n} = \eta\sqrt{n}.$$

A similar argument, requiring a few more technical steps, shows the same is true for any suffix average $\bar{w}$. Noting that $\hat{F}(0) = 0$, we get that the optimization error is $\Omega(\eta\sqrt{n})$. The implication for the generalization gap follows immediately with the standard step size choice of $\eta = 1/\sqrt{n}$, owed to SGD's population risk convergence guarantee. For an arbitrary step size, the result follows from a simple computation, and the proof is concluded. □

The magnitude of the generalization gap featured in Theorem 1 stems from the large optimization error, which results in the empirical risk over-estimating the population risk by a large margin. Evidently, for convex losses the converse is always false: the empirical risk will never significantly under-estimate the population risk (a fact that turns out to be false when losses are only required to be convex in expectation; see Section 3.1). Indeed, stability of the regularized ERM solution implies the ERM does not perform significantly better on the training set compared to the population minimizer $w^\star$.

Lemma 1. Let $W \subset \mathbb{R}^d$ with diameter $D$, $\mathcal{Z}$ any distribution over $Z$, and $f : W \times Z \to \mathbb{R}$ convex and $G$-Lipschitz in the first argument. Then

$$\mathbb{E}\big[F(w^\star) - \hat{F}(w^\star_S)\big] \le \frac{4GD}{\sqrt{n}}.$$

Proof. Denote the regularized ERM by $w^\lambda_S \coloneqq \arg\min_{w\in W}\big\{\frac{1}{n}\sum_{i=1}^n f(w; z_i) + \frac{\lambda}{2}\lVert w\rVert^2\big\}$. Observe:

$$F(w^\star) \le \mathbb{E} F(w^\lambda_S) \le \mathbb{E}\hat{F}(w^\lambda_S) + \frac{4G^2}{\lambda n} \le \mathbb{E}\hat{F}(w^\star_S) + \frac{\lambda}{2}D^2 + \frac{4G^2}{\lambda n},$$

where the second inequality follows from stability of the regularized ERM (see Lemma 13). Choosing $\lambda \coloneqq 2G/(D\sqrt{n})$, we get that $\mathbb{E}\big[F(w^\star) - \hat{F}(w^\star_S)\big] = F(w^\star) - \mathbb{E}\hat{F}(w^\star_S) \le \frac{4GD}{\sqrt{n}}$, as claimed. □

Since the optimization error is always positive, we see that the upper bound given by Lemma 1 implies an upper bound on the difference between the population and empirical risks.

Corollary 1. For any distribution $\mathcal{Z}$ over $Z$ and Lipschitz loss function $f : W \times Z \to \mathbb{R}$ convex in the first argument, running SGD with step size $\eta \coloneqq 1/\sqrt{n}$ guarantees $\mathbb{E}\big[F(\bar{w}_S) - \hat{F}(\bar{w}_S)\big] \le O(1/\sqrt{n})$.

Proof. We have

$$\mathbb{E}\big[F(\bar{w}_S) - \hat{F}(\bar{w}_S)\big] = \mathbb{E}\big[F(\bar{w}_S) - F(w^\star)\big] + \mathbb{E}\big[F(w^\star) - \hat{F}(\bar{w}_S)\big].$$

The population error term on the RHS is $O(1/\sqrt{n})$ by the classical analysis of SGD. The second term is bounded by Lemma 1: $\mathbb{E}[F(w^\star) - \hat{F}(\bar{w}_S)] \le \mathbb{E}[F(w^\star) - \hat{F}(w^\star_S)] \le 4GD/\sqrt{n}$, and the result follows. □

In the subsections that follow, we continue to study the generalization gap in the context of common variants of the basic SCO setup.
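Before moving to these variants, the one-pass projected SGD rule from the preliminaries is simple enough to state in code. The following NumPy sketch is purely illustrative: the `subgradient` oracle is an assumed user-supplied function, and the domain is taken to be the $L_2$ unit ball only for concreteness.

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def one_pass_sgd(subgradient, samples, eta, d):
    """One-pass projected SGD over the training set:
    w_{t+1} = Pi_W(w_t - eta * g_t), with g_t a subgradient of f(.; z_t) at w_t.
    Each training example is visited exactly once, in order."""
    w = np.zeros(d)                      # initialize at w_1 = 0
    iterates = [w.copy()]
    for z in samples:
        g = subgradient(w, z)            # assumed oracle: g in the subdifferential
        w = project_l2_ball(w - eta * g)
        iterates.append(w.copy())
    return iterates                      # suffix averages can be formed from these

# Example on the toy loss f(w; z) = |w . z| over the unit ball:
rng = np.random.default_rng(0)
data = rng.standard_normal((100, 10))
subgrad = lambda w, z: np.sign(w @ z) * z
ws = one_pass_sgd(subgrad, data, eta=1.0 / np.sqrt(100), d=10)
w_bar = np.mean(ws[50:], axis=0)         # one possible suffix average
```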
3.1 SCO with non-convex components

When we relax the convexity assumption and only require the losses to be convex in expectation, we can construct a learning problem where SGD exhibits a case of benign overfitting. In contrast to Theorem 1, here we actually drive the SGD iterates towards an ERM solution, thus achieving a low optimization error and an empirical risk that under-estimates the population risk.

Theorem 2. Let $n \in \mathbb{N}$, $n \ge 4$, $d \ge 2^{4n}\log n$, $W = B_0^{2d}(1)$, and $\eta \le 1/\sqrt{n}$. Then there exists a distribution $\mathcal{Z}$ over $Z$ and a 4-Lipschitz loss $f : W \times Z \to \mathbb{R}$, where $\mathbb{E}_{z\sim\mathcal{Z}} f(w; z)$ is convex in $w$, such that for any suffix average $\bar{w}_S$ of SGD initialized at $w_1 = 0$ with step size $\eta$:

$$\mathbb{E}\big[F(\bar{w}_S) - \hat{F}(\bar{w}_S)\big] = \Omega(\eta^2 n).$$

The construction and proof of Theorem 2, given in the supplementary, follow a methodology similar to that of Theorem 1. Here, however, we exploit non-convex losses to form an empirical loss landscape where the ERM solution significantly outperforms the population minimizer $w^\star$ (notably, a feat not possible when losses are individually convex, by Corollary 1). Our loss function is defined by $f(w; z) \coloneqq \sum_{i=1}^d z(i)\,w(i)^2 + \varphi(w; z)$, with each component playing a similar role as before. We work with the distribution $z \sim \{0, \pm 1\}^d$, where $z(i) = 1$ w.p. $\delta$, $z(i) = -1$ w.p. $\delta$, and $z(i) = 0$ w.p. $1 - 2\delta$. The intuition is that coordinates accumulating many $-1$'s offer regions in the $L_2$ ball where the empirical risk is "too good" compared to the population risk. We tailor the extra dimensions and $\varphi$ in coordination with the $-1$ values so that the sub-gradients guide the SGD iterates towards these regions, in exactly the same manner that the construction of Theorem 1 drives the iterates to high-loss regions. We note that while the statement of Theorem 2 is specialized to step sizes smaller than $1/\sqrt{n}$, it may be extended to any step size using arguments similar to those given in the proof of Theorem 1.

3.2 SCO with strongly convex components

Our basic construction extends to the strongly convex case with only technical modifications to Theorem 1. The theorem below concerns the standard step size choice for strongly convex objectives; we provide its proof in the supplementary.

Theorem 3. Let $n \in \mathbb{N}$, $n \ge 10$, $d \ge 2^{4n}\log n$, $W = B_0^{2d}(1)$, and $\lambda \ge 1/\sqrt{n}$. Then there exists a distribution over an instance set $Z$ and a 4-Lipschitz, $\lambda$-strongly convex loss function $f : W \times Z \to \mathbb{R}$ such that:

(i) the optimization error is large: $\mathbb{E}_{S\sim\mathcal{Z}^n}\big[\hat{F}(\bar{w}_S) - \hat{F}(w^\star_S)\big] = \Omega\big(\frac{1}{\lambda\sqrt{n}}\big)$;

(ii) the generalization gap is large: $\mathbb{E}_{S\sim\mathcal{Z}^n}\big[\hat{F}(\bar{w}_S) - F(\bar{w}_S)\big] = \Omega\big(\frac{1}{\lambda\sqrt{n}}\big)$;

where $\bar{w}_S$ is any suffix average of SGD initialized at $w_1 = 0$, with step size schedule $\eta_t = 1/(\lambda t)$. Furthermore, the problem instance where this occurs is precisely the $\lambda$-regularized version of the example featured in Theorem 1.

We note that an immediate implication of the above theorem is that if we seek a generalization gap upper bound for a weakly convex problem by means of regularization (meaning, by running SGD on a regularized problem), we would have to take $\lambda \ge 1$ to guarantee a gap of $O(1/\sqrt{n})$. To see this, note that the generalization gap (of any hypothesis) for the regularized problem is the same as that for the original. On the other hand, taking $\lambda \ge 1$ will of course be detrimental to the population error guarantee. Hence, one cannot circumvent the generalization gap lower bound by regularization without compromising the population error. We conclude this section with a note regarding the stability rates of SGD in non-smooth SCO.
Implicit in Theorem 1 is that the average stability of SGD coincides with the tight uniform stability rate of $\Theta(\eta\sqrt{n})$ established by [7]. This is because Theorem 1 provides the $\Omega(\eta\sqrt{n})$ lower bound for the most general stability notion, which is precisely the generalization gap [32]. We refer the reader to the supplementary for a more elaborate discussion.

4 SGD with vs. without replacement

In this section, we consider a different algorithm in the context of the basic SCO setup: SGD over examples drawn with replacement from the training set. This is not to be confused with one-pass SGD discussed in Section 3, which corresponds to without-replacement SGD on the training set, or alternatively with-replacement SGD over the population distribution. Given a training set $S = \{z_1, \ldots, z_n\} \sim \mathcal{Z}^n$, we define with-replacement projected SGD initialized at $w_1 \in W$ by

$$w_{t+1} \leftarrow \Pi_W(w_t - \eta \hat{g}_t), \quad \text{where } \hat{g}_t \in \partial f(w_t; \hat{z}_t) \text{ and } \hat{z}_t \sim \mathrm{Unif}(S).$$

Perhaps surprisingly, this version of SGD does not overfit the training data; our theorem below establishes that, with proper iterate averaging, the population risk converges at the optimal rate.

Theorem 4. Let $W \subset \mathbb{R}^d$ with diameter $D$, $\mathcal{Z}$ be any distribution over $Z$, and $f : W \times Z \to \mathbb{R}$ be convex and $G$-Lipschitz in the first argument. Let $S \sim \mathcal{Z}^n$ be a training set of $n \in \mathbb{N}$ datapoints drawn i.i.d. from $\mathcal{Z}$, and consider running SGD over training examples sampled with replacement, uniformly and independently, from $S$. Then, for step size $\eta = \frac{D}{G\sqrt{n}}$ and $\bar{w} \coloneqq \frac{2}{n+1}\sum_{t=1}^{n}\frac{n-t+1}{n} w_t$, the following upper bound holds:

$$\mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] \le \frac{10GD}{\sqrt{n}}.$$

Proof. Fix a time step $t \in [n]$, and observe that if we do not condition on $S$, we may view the random datapoint $\hat{z}_t$ as a mixture between a fresh i.i.d. sample from the population and a uniformly distributed sample from the previously processed datapoints $S_{t-1} \coloneqq \{\hat{z}_1, \ldots, \hat{z}_{t-1}\}$:

$$\hat{z}_t \mid S_{t-1} = \begin{cases} z \sim \mathcal{Z} & \text{w.p. } 1 - \frac{t-1}{n}, \\ z \sim \mathrm{Unif}(S_{t-1}) & \text{w.p. } \frac{t-1}{n}. \end{cases}$$

With this in mind, denote $\hat{f}_t(w) \coloneqq f(w; \hat{z}_t)$, fix $S_{t-1}$, and observe:

$$\mathbb{E}_{\hat{z}_t}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star) \mid S_{t-1}\big] = \Big(1 - \frac{t-1}{n}\Big)\,\mathbb{E}_{z\sim\mathcal{Z}}\big[f(w_t; z) - f(w^\star; z)\big] + \frac{t-1}{n}\cdot\frac{1}{t-1}\sum_{i=1}^{t-1}\big(\hat{f}_i(w_t) - \hat{f}_i(w^\star)\big).$$

Rearranging and taking expectation with respect to $S_{t-1}$, we obtain

$$\Big(1 - \frac{t-1}{n}\Big)\,\mathbb{E}\big[f(w_t; z) - f(w^\star; z)\big] = \mathbb{E}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big] + \mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^{t-1}\big(\hat{f}_i(w^\star) - \hat{f}_i(w_t)\big)\Big] \le \mathbb{E}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big] + \frac{4GD\sqrt{t}}{n}, \qquad (2)$$

where the inequality follows from Lemma 1. Now, a direct computation gives $\sum_{t=1}^{n}\big(1 - \frac{t-1}{n}\big) = \frac{n+1}{2}$, which motivates setting $\bar{w} \coloneqq \frac{2}{n+1}\sum_{t=1}^{n}\frac{n-t+1}{n} w_t$. By convexity of $F$, Eq. (2), and the standard regret analysis of gradient descent [e.g., 15], we now have

$$\mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] \le \frac{2}{n+1}\sum_{t=1}^{n}\Big(1 - \frac{t-1}{n}\Big)\,\mathbb{E}\big[F(w_t) - F(w^\star)\big] \le \frac{2}{n+1}\sum_{t=1}^{n}\mathbb{E}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big] + \frac{2}{n+1}\sum_{t=1}^{n}\frac{4GD\sqrt{t}}{n} \le \frac{2}{n}\,\mathbb{E}\Big[\sum_{t=1}^{n}\hat{f}_t(w_t) - \hat{f}_t(w^\star)\Big] + \frac{8GD}{\sqrt{n}} \le \frac{2}{n}\Big(\frac{D^2}{2\eta} + \frac{\eta G^2 n}{2}\Big) + \frac{8GD}{\sqrt{n}} = \frac{10GD}{\sqrt{n}},$$

where the last inequality follows from our choice of $\eta = \frac{D}{G\sqrt{n}}$. □

Evidently, the averaging scheme dictated by Theorem 4 does little to hurt the empirical risk convergence guarantee, which follows from the standard analysis with minor modifications (for completeness, we provide a formal statement and proof in the supplementary). Combined with Lemma 1, this immediately implies a generalization gap upper bound for with-replacement SGD. Notably, this shows with-replacement SGD provides an example of a (natural) algorithm in the SCO learning setup that is not even stable on average, but nonetheless has a well-bounded generalization gap; we refer the reader to the discussion in the supplementary for more details.
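As an illustration of the algorithm analyzed in Theorem 4, the following sketch runs with-replacement SGD and accumulates the weighted iterate average $\bar{w} = \frac{2}{n+1}\sum_{t=1}^{n}\frac{n-t+1}{n} w_t$ on the fly. The `subgradient` oracle is again an assumed ingredient, and projection onto the unit ball is chosen only for concreteness.

```python
import numpy as np

def with_replacement_sgd(subgradient, S, eta, d, seed=0):
    """With-replacement SGD over the training set S, accumulating the
    weighted average of Theorem 4: w_bar = 2/(n+1) * sum_t ((n-t+1)/n) * w_t."""
    rng = np.random.default_rng(seed)
    n = len(S)
    w = np.zeros(d)
    w_bar = np.zeros(d)
    for t in range(1, n + 1):
        w_bar += (2.0 / (n + 1)) * ((n - t + 1) / n) * w  # weight on iterate w_t
        z = S[rng.integers(n)]                 # z_t ~ Unif(S), with replacement
        w = w - eta * subgradient(w, z)        # assumed oracle
        norm = np.linalg.norm(w)
        if norm > 1.0:                         # projection onto the unit ball
            w = w / norm
    return w_bar
```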
Corollary 2. For any distribution $\mathcal{Z}$ and loss function $f : W \times Z \to \mathbb{R}$ convex and Lipschitz in the first argument, running SGD with the step size and averaging specified in Theorem 4 ensures $\mathbb{E}\big[F(\bar{w}) - \hat{F}(\bar{w})\big] \le O(1/\sqrt{n})$.

Proof. We have

$$\mathbb{E}\big[F(\bar{w}) - \hat{F}(\bar{w})\big] \le \mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] + \mathbb{E}\big[F(w^\star) - \hat{F}(w^\star_S)\big] + \mathbb{E}\big[\hat{F}(w^\star_S) - \hat{F}(\bar{w})\big].$$

The first term is upper bounded by the convergence of the population risk provided by Theorem 4, the second by Lemma 1, and the third by the standard analysis of SGD (see the supplementary). □

5 Multi-epoch SGD for empirical risk minimization

In this section, we forgo the existence of a population distribution and discuss convergence properties of without-replacement SGD (wor-SGD) for finite-sum optimization problems. A relatively long line of work, discussed in the introduction, studies this problem in the smooth case. The work of [20] noted that smoothness is a necessary assumption to obtain rates strictly better than the $O(1/\sqrt{nK})$ guaranteed by with-replacement SGD for $n$ losses and $K$ epochs, due to a lower bound that follows from the deterministic case (e.g., [10]). Here we establish that smoothness is in fact necessary to obtain rates that are not strictly worse than with-replacement SGD. We consider running multiple passes of wor-SGD to solve the finite-sum optimization problem given by the objective

$$F(w) \coloneqq \frac{1}{n}\sum_{t=1}^{n} f(w; t), \qquad (3)$$

where $\{f(w; t)\}_{t=1}^{n}$ is a set of $n$ convex, $G$-Lipschitz losses defined over a convex and compact domain $W \subseteq \mathbb{R}^d$. Throughout this section, we let $w^\star \coloneqq \arg\min_{w\in W} F(w)$ denote the minimizer of the objective in Eq. (3). In every epoch $k \in [K]$ we process the losses in the order specified by a permutation $\pi_k : [n] \to [n]$ sampled uniformly at random, either once at the beginning of the algorithm (single-shuffle) or at the onset of every epoch (multi-shuffle). Multi-epoch wor-SGD initialized at $w_1^1 \in W$ is specified by the following equations:

$$w_{t+1}^k \leftarrow \Pi_W(w_t^k - \eta g_t^k), \quad \text{where } g_t^k \in \partial f_t^k(w_t^k), \qquad w_1^{k+1} \coloneqq w_{n+1}^k,$$

where we denote $f_t^k(w) \coloneqq f(w; \pi_k(t))$.

A near-immediate implication of Theorem 1 is that there exists a set of convex losses on which a single epoch of wor-SGD cannot converge at a rate faster than $1/n^{1/4}$. Theorem 5 presented below extends our basic construction from Theorem 1 to accommodate multiple epochs. The main challenge here is in devising a mechanism that allows fresh bad gradient steps to take place in every new epoch.

Theorem 5. Let $n, K \in \mathbb{N}$, $K \ge 4$, $n \ge 4$, $c \coloneqq 4/(2^{1/K} - 1)$, $d \ge 2^{6n}\log(cnK)$, and $W = B_0^{d'}(1)$ where $d' = (nK + 1)d$. Then there exists a set of $n$ convex, 4-Lipschitz losses such that after $K$ epochs of either multi-shuffle or single-shuffle SGD initialized at $w_1^1 = 0$ with step size $\eta \le 1/\sqrt{2nK}$, it holds that

$$\mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] = \Omega\Big(\min\Big\{1,\ \eta\sqrt{\frac{n}{J}} + \frac{1}{\eta n K} + \eta\Big\}\Big),$$

where $\bar{w}$ is any suffix average over the last $J$ epochs. In particular, we obtain a bound of $\Omega\big(n^{-1/4}K^{-3/4}\big)$ for any suffix average and any choice of $\eta$.

The proof of Theorem 5 is provided in the supplementary. The construction in the proof takes the idea that the training set can be encoded in the SGD iterate to the extreme. The loss function and gradient oracle are designed in such a way as to record the training examples, in their full form and order, into the iterate. We then exploit this encoded information with an "adversarial" gradient oracle that returns the bad sub-gradients at each gradient step in every new epoch.
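To fix ideas before the upper bounds, here is a minimal sketch of the multi-shuffle variant with the uniform iterate average used in the next theorem. The `subgradient` oracle for the $i$-th component loss is assumed, and the domain is again taken to be the unit ball purely for illustration.

```python
import numpy as np

def multi_shuffle_sgd(subgradient, n, eta, d, K, seed=0):
    """Multi-epoch without-replacement SGD, multi-shuffle variant:
    a fresh permutation of the n component losses is drawn every epoch.
    Returns the uniform average of all nK iterates."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    w_avg = np.zeros(d)
    for _ in range(K):
        for i in rng.permutation(n):          # new shuffle each epoch
            w_avg += w / (n * K)              # average includes w_t^k before the step
            w = w - eta * subgradient(w, i)   # assumed oracle for the i-th loss
            norm = np.linalg.norm(w)
            if norm > 1.0:                    # W taken to be the unit ball here
                w = w / norm
    return w_avg
```

For the single-shuffle variant, one would draw `rng.permutation(n)` once before the outer loop and reuse it in every epoch.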
Next, we complement Theorem 5 with an upper bound that builds on stability arguments similar to those of the smooth case [20]. Importantly, though, the lack of smoothness means worse stability rates and necessitates extra care in the technical arguments. Below, we prove the multi-shuffle case, and defer the full details of the single-shuffle case to the supplementary.

Theorem 6. Let $S = \{f(w; t)\}_{t=1}^{n}$ be a set of $n$ convex, $G$-Lipschitz losses over a convex and compact domain $W \subseteq \mathbb{R}^d$ of diameter $D$, and consider running $K \ge 1$ epochs of wor-SGD over $S$. Then we have the following guarantees:

(i) For multi-shuffle, with step size $\eta = D/(Gn^{3/4}K^{1/2})$, we have $\mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] \le \frac{3GD}{n^{1/4}K^{1/2}}$.

(ii) For single-shuffle, with step size $\eta = D/(2Gn^{3/4}K^{3/4})$ and assuming $K \ge n$, we have $\mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] \le \frac{10GD}{n^{1/4}K^{1/4}}$.

In both of the above bounds, $\bar{w} = \frac{1}{nK}\sum_{k\in[K],\,t\in[n]} w_t^k$, and the expectation is over the random permutations of the losses.

Proof (multi-shuffle case). Observe:

$$F(\bar{w}) - F(w^\star) \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(F(w_t^k) - F(w^\star)\big) = \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(F(w_t^k) - f_t^k(w^\star)\big) = \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(F(w_t^k) - f_t^k(w_t^k)\big) + \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(f_t^k(w_t^k) - f_t^k(w^\star)\big) \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(F(w_t^k) - f_t^k(w_t^k)\big) + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2},$$

with the last inequality following from the standard $nK$-round regret bound for gradient descent [see, e.g., 15]. To bound the remaining term, using Lemma 10 we relate the difference between the without-replacement loss distribution and the full-batch objective to the uniform stability rate of SGD, which may then be bounded by applying Lemma 11:

$$\mathbb{E}\big[F(w_t^k) - f_t^k(w_t^k)\big] = \mathbb{E}_{\pi_1,\ldots,\pi_{k-1}}\mathbb{E}_{\pi_k}\big[F(w_t^k) - f_t^k(w_t^k) \mid w_1^k\big] \le \mathbb{E}_{\pi_1,\ldots,\pi_{k-1}}\big[G\,\epsilon^{\mathrm{SGD}}_{\mathrm{stab}}(t-1)\big] = G\,\epsilon^{\mathrm{SGD}}_{\mathrm{stab}}(t-1) \le 2\eta G^2\sqrt{t}.$$

Concluding, we have that

$$\mathbb{E}\big[F(\bar{w}) - F(w^\star)\big] \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\mathbb{E}\big[F(w_t^k) - f_t^k(w_t^k)\big] + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2} \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n} 2\eta G^2\sqrt{t} + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2} \le 2\eta G^2\sqrt{n} + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2} \le \frac{3GD}{n^{1/4}K^{1/2}},$$

where the last inequality follows from our choice of $\eta = D/(Gn^{3/4}K^{1/2})$. □

Acknowledgements and funding disclosure. This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 882396), by the Israel Science Foundation (grants 993/17, 2549/19, 2188/20), by the Len Blavatnik and the Blavatnik Family Foundation, by the Yandex Initiative in Machine Learning at Tel Aviv University, by a grant from the Tel Aviv University Center for AI and Data Science (TAD), and by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What is the focus of the paper in terms of machine learning algorithms?
2. What are the contributions of the paper regarding the convergence of SGD?
3. Are there any limitations or concerns regarding the applicability of the results to real-world machine learning problems?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper: The authors study SGD, with and without replacement, and construct counterexamples for which SGD fails to converge despite the objective being otherwise well-behaved (e.g., convex and smooth).

Strengths And Weaknesses: This appears to be a significant work but is inappropriate for the NeurIPS community because it is too theoretical. The dataset Z and the objective function (bottom pg. 5) are simple, but it is unclear how this result relates to real-world machine learning problems. This is evidenced by the lack of experiments in the paper.

Questions: None

Limitations: N/A
NIPS
Title Benign Underfitting of Stochastic Gradient Descent Abstract We study to what extent may stochastic gradient descent (SGD) be understood as a “conventional” learning rule that achieves generalization performance by obtaining a good fit to training data. We consider the fundamental stochastic convex optimization framework, where (one pass, without-replacement) SGD is classically known to minimize the population risk at rate O (1/ √ n), and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of Ω(1). Consequently, it turns out that SGD is not algorithmically stable in any sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related with-replacement SGD, for which we show that an analogous phenomenon does not occur and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results. N/A √ 𝑛), and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of Ω(1). Consequently, it turns out that SGD is not algorithmically stable in any sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related with-replacement SGD, for which we show that an analogous phenomenon does not occur and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results. 1 Introduction Conventional wisdom in statistical learning revolves around what is traditionally known as the bias-variance dilemma; the classical theory stipulates the quality of fit to the training data be in a trade-off with model complexity, aiming for a sweet spot where training error is small but yet representative of performance on independent test data. This perspective is reflected in the vast majority of generalization bound techniques offered by contemporary learning theory. Uniform convergence approaches [36, 4] seek capacity control over the model function class, and employ uniform laws of large numbers to argue convergence of sample averages to their respective expectations. Algorithmic stability [9, 32] on the other hand, builds on controlling sensitivity of the learning algorithm to small changes in its input, and provides algorithm dependent bounds. Nevertheless, despite the conceptual and technical differences between these two methods, both ultimately produce risk bounds by controlling the training error, and the generalization gap. The same is true for many other techniques, including sample compression [17, 2], PAC-Bayes [18, 12], and information theoretic generalization bounds [29, 37, 24], to name a few. In recent years it has become clear there are other, substantially different, ways to manage the fit vs. 
complexity trade-off, that are in a sense incompatible with traditional generalization bound techniques. Evidently, heavily over-parameterized deep neural networks may be trained to perfectly 36th Conference on Neural Information Processing Systems (NeurIPS 2022). fit training data and generalize well nonetheless [38, 25, 26], thus seemingly disobeying conventional statistical wisdom. This phenomenon has garnered significant attention, with a flurry of research works dedicated to developing new techniques that would be able to explain strong generalization performance of algorithms in this so called interpolation regime (see 6, 8 and references therein). Notably, while these algorithms do not strike a balance between model complexity and fit to the data in the traditional sense, fundamentally, they still minimize the empirical risk as a proxy to test performance. To summarize, in the classical and modern regimes alike, learning methods are thought of as minimizing some combination of the training error and generalization gap, with reasoning that relies in one way or another on the following trivial, yet arguably most profound, bound: test-error ≤ train-error + |generalization gap| . (1) In this work, we focus on stochastic gradient descent (SGD)—the canonical algorithm for training machine learning models nowadays—and ask whether its generalization performance can be understood through a similar lens. We consider the fundamental stochastic convex optimization (SCO) framework, in which it is well known that SGD minimizes the population risk at a rate of 𝑂 (1/ √ 𝑛) [23]. Remarkably, the classical analysis targets the population risk directly, and in contrast with other generalization arguments, at least seemingly does not rely on the above bound. This highlights an intriguing question: Are these quantities, so fundamental to learning theory, relevant to the way that SGD “works”? Put differently, is it possible to provide a more “conventional" analysis of SGD that conforms with (1)? Our main result shows that, perhaps surprisingly, there exist convex learning problems where the above bound becomes vacuous for SGD: namely, SGD minimizes the population risk, but at the same time, it does not minimize the empirical risk and thus exhibits constant generalization gap. This accords neither with the traditional viewpoint nor with that of interpolation, as both recognize the empirical risk as the principal minimization objective. We refer to this phenomenon as benign underfitting: evidently, SGD underfits the training data, but its classical analysis affirms this underfitting to be benign, in the sense that test performance is never compromised as a result. Our construction presents a learning problem where the output of SGD with step size η over 𝑛 i.i.d. training examples isΩ(η √ 𝑛) sub-optimal w.r.t. the best fit possible, and consequently has a generalization gap of the same order. Notably, with the standard step size choice of 1/ √ 𝑛 necessary to ensure the population risk converges at the optimal rate this lower bound amounts to a constant. Many previously plausible explanations for generalization properties of this algorithm are thereby rendered inadequate, at least in the elementary convex setup we consider here. First, it is clear that SGD cannot be framed as any reasonable regularized empirical risk minimization procedure for the simple reason that it does not minimize the empirical risk, which challenges the implicit regularization viewpoint to the generalization of SGD. 
Second, any attempt to explain generalization of SGD by uniform convergence over any (possibly data-dependent) hypotheses set cannot hold, simply because the sample average associated with the very same training set SGD was trained on is not necessarily close to its respective expectation. Finally, as it turns out, SGD provides for a strikingly natural example of an algorithm that generalizes well but is not stable in any sense, as the most general notion of algorithmic stability is entirely equivalent to the generalization gap [32]. We then move on to study the generalization gap and empirical risk guarantees of SGD in a broader context. We study the case of non-convex and strongly convex component functions, and present natural extensions of our basic result. In addition, we analyse the variant of SGD where datapoints are sampled with-replacement from the training set, in which case the train error is of course low but perhaps surprisingly the population risk is well behaved. Finally, we make the natural connection to the study of without-replacement SGD for empirical risk minimization, and derive upper and lower bounds for the multi-epoch regime. These last two points are discussed in further detail in the following. With vs without-replacement SGD. We may view one-pass SGD as processing the data via without-replacement sampling from the training set, as randomly reshuffling the examples does not change their unconditional distribution. Thus, it is interesting to consider the generalization gap of the closely related algorithm given by running SGD over examples sampled with-replacement from the training set. Considering instability (see the supplementary for a detailed discussion) of SGD for non-smooth losses and the fact that this variant targets the empirical objective, a priori it would seem this algorithm would overfit the training set and not provide strong population risk guarantees. Surprisingly, our analysis presented in Section 4 reveals this is not the case, and that with a certain iterate averaging scheme the population risk converges at the optimal rate. Consequently, it turns out the generalization gap is well bounded, and therefore that this variant constitutes a natural learning rule that is not stable in any sense but the most general one. Without-replacement SGD for empirical risk minimization. The example featured in our main construction implies a lower bound of Ω(𝑛−1/4) on the convergence rate of a single epoch of withoutreplacement SGD for finite sum optimization problems. In this setting, we have a set of 𝑛 convex losses and we wish to minimize their sum by running SGD over random shufflings of the losses. While the smooth case has been studied extensively (e.g., [28, 27, 20, 31]), the non-smooth case has hardly received much attention. In Section 5 we extend our basic construction to a lower bound for the multi-epoch regime, and complement it with nearly matching upper bounds. Our techniques. Fundamentally, we exploit the fact that dimension independent uniform convergence does not hold in SCO [32]. This is a prerequisite to any attempt at separating train and test losses of any hypothesis vector, let alone that produced by SGD. Another essential condition is the instability of SGD for non-smooth losses, as any form of stability would immediately imply a generalization gap upper bound regardless of uniform convergence. 
Our main lower bound draws inspiration from constructions presented in the works of [7] and [1], both of which rely on instability, the latter also exploiting failure of uniform convergence. However, neither of these contains the main ideas necessary to provoke the optimization dynamics required in our example. A crucial ingredient in our construction consists of encoding into the SGD iterate information about previous training examples. This, combined with careful design of the loss function, gradient oracle and population distribution, allows correlating sub-gradients of independent training examples, and in turn guiding the SGD iterates to ascend the empirical risk. 1.1 Summary of main contributions To summarize, the main contributions of the paper are as follows: • One-pass SGD in SCO. In Section 3, we study the basic SCO setup where the component losses are assumed to be individually convex, and present a construction where the expected empirical risk and therefore the generalization gap are both Ω(η √ 𝑛). We also provide extensions of our main construction demonstrating; – SCO with non-convex component functions may exhibit cases of benign overfitting, where 𝔼 [ 𝐹 (𝑤) − 𝐹 (𝑤) ] = Ω(η2𝑛). – In SCO with λ-strongly convex losses the worst case generalization gap is Ω(1/λ √ 𝑛) for the standard step size choice. • With vs without replacement SGD in SCO. In Section 4, we prove the variant of SGD where the training examples are processed via sampling with-replacement from the training set minimizes the population risk at the optimal rate, and thus enjoys a generalization gap upper bound bound of 𝑂 (1/ √ 𝑛). • Multi-epoch without-replacement SGD. In Section 5, we study convergence rates of withoutreplacement SGD for finite sum convex optimization problems. We prove a lower bound of Ω(𝑛−1/4𝐾−3/4) on the optimization error after 𝐾 epochs over 𝑛 convex losses, and complement with upper bounds of 𝑂 (𝑛−1/4𝐾−1/2) and 𝑂 (𝑛−1/4𝐾−1/4) for respectively the multi-shuffle and single-shuffle SGD variants. 1.2 Additional related work Gradient descent, algorithmic stability and generalization. Closely related to our work is the study of stability properties of SGD. For smooth losses, [14] provide upper bounds on the generalization gap by appealing to uniform stability, yielding an 𝑂 (1/ √ 𝑛) rate for a single epoch of 𝑛 convex losses and the standard step size choice. In a later work, [7] prove tight rates for uniform stability of SGD in the setting of non-smooth losses, establishing these scale substantially worse; Θ(η √ 𝑛) for step size η and 𝑛 training examples. Our work shows that in fact the worst case rate of the generalization gap completely coincides with the uniform stability rate of SGD. A number of works prior to ours studied the extent to which SGD can be explained by implicit regularization in SCO. [16] study the setup where losses are smooth but only required to be convex in expectation, and show SGD may successfully learn when regularized ERM does not. Prior to their work, [11] also rule out a wide range of implicit regularization based explanations of SGD in the basic SCO setup with convex losses. On a more general level, our work is related to the study of stability and generalization in modern learning theory, pioneered by [9, 32]. In particular, the failure of (dimension independent) uniform convergence in SCO was established in [32]. The work of [13] improves the dimension dependence in the construction of [32] from exponential to linear in the number of training examples. 
Notably, the construction featured in our main result requires the dimension to be exponential in the sample size, however the techniques of [13] do not readily extend to our setting. Thus, the optimal dimension dependence for a generalization gap lower bound is left for future work. Without-replacement SGD for empirical risk minimization. A relatively long line of work studies convergence properties of without-replacement SGD from a pure optimization perspective (e.g., [28, 20, 30, 27, 19, 31]). Nearly all the papers in this line of work adopt the smoothness assumption, with near optimal bounds established by [20]. An exception is the paper of [33] where an 𝑂 (1/ √ 𝑛𝐾) upper bound is obtained for 𝑛 datapoints and 𝐾 epochs, albeit only for generalized linear models over a bounded domain — notably, a setting where uniform convergence holds. Prior to this thread of research, [22] prove a convergence rate of 𝑂 (𝑛/ √ 𝐾) for non-smooth loss functions that applies for any ordering of the losses. To the best of our knowledge, this is also the state-of-the-art result for without-replacement SGD in the non-smooth setting without further assumptions on the loss functions. Benign overfitting vs. benign underfitting. While both benign underfitting and benign overfitting challenge traditional generalization techniques, that postulate the training error to represent the test error, as we discuss above these two phenomena point to very different regimes of learning. In particular, [34] shows that benign overfitting requires distributional assumptions for the interpolating algorithm to succeed. In contrast, we show that benign underfitting happens for SGD in a setting where it provably learns (namely, SCO), without any distributional assumptions. We also point out that Corollary 1 shows benign overfitting cannot happen in the setup we consider, hence the two phenomena seem to rise in different setups. Explaining generalization of interpolators. As already discussed, there is a large recent body of work dedicated to understanding why over-parameterized models trained by SGD to zero training error generalize well [6, 8, and references therein]. In particular, the work of [5] aims at explaining the phenomenon for high dimensional linear models. Some recent papers investigate limitations of certain techniques in explaining generalization of interpolating algorithms: [21] show uniform convergence fails to explain generalization of SGD in a setup where the generalization gap is in fact well bounded, thus in sharp contrast to our work; [3] rule out the possibility of a large class of excess risk bounds to explain generalization of minimum norm interpolants. Unlike our work, they study properties of possible risk bounds when benign overfitting occurs, and thus do not pertain to SGD that never benignly overfits in SCO. 2 Preliminaries We consider stochastic convex optimization (SCO) specified by a population distribution Z over a datapoint set 𝑍 , and loss function 𝑓 : 𝑊 × 𝑍 → ℝ where𝑊 ⊂ ℝ𝑑 is convex and compact. We denote 𝐹 (𝑤) B 𝔼𝑧∼Z 𝑓 (𝑤; 𝑧), (population loss) 𝐹 (𝑤) B 1 𝑛 𝑛∑︁ 𝑖=1 𝑓 (𝑤; 𝑧𝑖), (empirical loss) where {𝑧1, . . . , 𝑧𝑛} ⊆ 𝑍 stands for the training set, which we regularly denote by 𝑆. We let 𝑤★ B min𝑤∈𝑊 𝐹 (𝑤) denote the population minimizer, and 𝑤★𝑆 B min𝑤∈𝑊 𝐹 (𝑤) denote the empirical risk minimizer (ERM). The diameter of𝑊 is defined by max𝑥,𝑦∈𝑊 {∥𝑥 − 𝑦∥} where ∥·∥ denotes the euclidean norm, and B𝑑0 (1) B { 𝑥 ∈ ℝ𝑑 | ∥𝑥∥ ≤ 1 } denotes the 𝐿2 unit ball in ℝ𝑑 . Given a training set 𝑆 = {𝑧1, . . . 
Given a training set $S = \{z_1, \ldots, z_n\} \sim \mathcal{Z}^n$ and a learning algorithm that outputs a hypothesis $w_S$, we define the generalization gap to be the absolute value of the expected difference between test and train losses:
$$\big|\mathbb{E}_{S\sim\mathcal{Z}^n}\big[F(w_S) - \widehat{F}(w_S)\big]\big|. \qquad \text{(generalization gap)}$$
Throughout most of the paper, we consider one-pass projected SGD over $S$: initialize at $w_1 \in W$; for $t = 1, 2, \ldots, n$:
$$w_{t+1} \leftarrow \Pi_W\big(w_t - \eta g_t\big), \quad \text{with } g_t \in \partial f(w_t; z_t),$$
where $\partial f(w; z)$ denotes the set of sub-gradients of $f(\cdot\,; z)$ at the point $w \in W$, and $\Pi_W : \mathbb{R}^d \to W$ the projection operation onto $W$.

3 A generalization gap lower bound for SGD

In this section, we establish our main result: that there exist convex learning problems where SGD incurs a large optimization error and therefore also a large generalization gap. When losses are convex these two quantities are closely related, since in expectation the empirical risk minimizer cannot significantly outperform the population minimizer (a claim that will be made rigorous shortly after our main theorem). Our construction builds on losses that are highly non-smooth, leading to SGD taking gradient steps that actually ascend the empirical objective.

Theorem 1. Let $n \in \mathbb{N}$, $n \ge 4$, $d \ge 2^{4n\log n}$, and $W = B_0^{2d}(1)$. Then there exists a distribution over an instance set $Z$ and a 4-Lipschitz convex loss function $f : W \times Z \to \mathbb{R}$ such that running SGD initialized at $w_1 = 0$, with step size $\eta > 0$ over $S \sim \mathcal{Z}^n$, yields:
(i) a large optimization error: $\mathbb{E}\big[\widehat{F}(\overline{w}_S) - \widehat{F}(w^\star_S)\big] = \Omega\big(\min\big\{\eta\sqrt{n},\, \tfrac{1}{\eta\sqrt{n}}\big\}\big)$;
(ii) a large generalization gap: $\mathbb{E}\big[\widehat{F}(\overline{w}_S) - F(\overline{w}_S)\big] = \Omega\big(\min\big\{\eta\sqrt{n},\, \tfrac{1}{\eta\sqrt{n}}\big\}\big)$;
where $\overline{w}_S$ is any suffix average of the iterates. In particular, for $\eta = \Theta(1/\sqrt{n})$, the population risk is $\mathbb{E}[F(\overline{w}_S) - F(w^\star)] = O(1/\sqrt{n})$, while the generalization gap and training error are both $\Omega(1)$.

A detailed proof of Theorem 1 is deferred to the supplementary; in the following we provide an informal overview containing its principal ingredients.

Proof sketch. Let $Z := \{0, 1\}^d$, and consider a population distribution $\mathcal{Z}$ such that $z(i) = 1$ with probability $\delta$, independently across coordinates. We will use a loss function of the form
$$f(w; z) := \|z \odot w\| + \varphi(w; z),$$
where $\odot$ denotes the element-wise product. The high-level idea is that the norm component penalizes $w$'s that correlate with the given sample point $z$, and the $\varphi$ function (the details of which are left for the supplementary) is tailored so that it drives the SGD iterates precisely to those areas in the $L_2$ ball where the iterate correlates with the training set $\{z_1, \ldots, z_n\}$. In addition, the choice of parameters is such that the population loss is approximately zero over the entire domain. Taking $d$ sufficiently large compared to $\delta^{-1}$, we ensure that w.h.p., for every round $t \in [n]$ there exist many coordinates $i \in [d]$ with a prefix of ones: $z_1(i) = \cdots = z_{t-1}(i) = 1$. With $\delta$ chosen sufficiently small compared to $n$, we ensure that as long as $i \in [d]$ is any coordinate chosen independently of $\{z_{t+1}, \ldots, z_n\}$, w.h.p. this coordinate will have a suffix of zeros: $z_{t+1}(i) = \cdots = z_n(i) = 0$. Our goal is to make SGD take steps $w_{t+1} \approx w_t - \eta e_{i_t}$ (where $e_i$ denotes the $i$'th standard basis vector), where $i_t \in [d]$ is a coordinate with the aforementioned property of having a prefix of ones followed by a suffix of zeros. Note that since these steps are taken after the prefix of ones has ended, they will inflict large empirical loss from the norm component, but will not be "corrected" by future steps owed to the suffix of zeros. To achieve this, we design $\varphi$ so that it encodes the relevant information into the SGD iterates.
Specifically, $\varphi$ "flags" (using some extra dimensions) all coordinates $i \in [d]$ where a prefix of ones has been encountered. In addition, using another max component in $\varphi$ we have that for all such coordinates $i$, $e_i \in \partial f(w_t; z)$ for any example $z$ (as this component in the loss depends only on the iterate $w_t$). In particular, we get that $e_i \in \partial f(w_t; z_t)$. Then, our gradient oracle just returns a sub-gradient pointing towards one of these coordinates (for convenience, we use the minimal one), which we denote by $i_t$, and SGD makes the desired step. Notably, the coordinate $i_t$ chosen by the sub-gradient oracle is independent of future examples, and therefore will have a suffix of zeros w.h.p. Hence, as mentioned, this ensures no gradient signal after round $t$ will be able to correct the empirical risk ascent on $i_t$. Concluding, we have for the final iterate $w := w_{n+1}$ that $w(i_t) = -\eta$ for all $t \in [n]$, therefore
$$\widehat{F}(w) = \frac{1}{n}\sum_{i=1}^{n} f(w; z_i) \approx \frac{1}{n}\sum_{i=1}^{n} \|z_i \odot w\| \approx \|w\| \approx \sqrt{\eta^2 n} = \eta\sqrt{n}.$$
A similar argument requiring a few more technical steps shows the same is true for any suffix average $\overline{w}$. Noting that $\widehat{F}(0) = 0$, we get that the optimization error is $\Omega(\eta\sqrt{n})$. The implication for the generalization gap follows immediately with the standard step size choice of $\eta = 1/\sqrt{n}$, owed to SGD's population risk convergence guarantee. For an arbitrary step size, the result follows from a simple computation, and the proof is concluded. □

The magnitude of the generalization gap featured in Theorem 1 stems from the large optimization error, which results in the empirical risk over-estimating the population risk by a large margin. Evidently, for convex losses the converse is always false: the empirical risk will never significantly under-estimate the population risk (a fact that will turn out false when losses are only required to be convex in expectation — see Section 3.1). Indeed, stability of the regularized ERM solution implies the ERM does not perform significantly better on the training set compared to the population minimizer $w^\star$.

Lemma 1. Let $W \subset \mathbb{R}^d$ with diameter $D$, $\mathcal{Z}$ any distribution over $Z$, and $f : W \times Z \to \mathbb{R}$ convex and $G$-Lipschitz in the first argument. Then
$$\mathbb{E}\big[F(w^\star) - \widehat{F}(w^\star_S)\big] \le \frac{4GD}{\sqrt{n}}.$$

Proof. Denote the regularized ERM by $w^\lambda_S := \arg\min_{w\in W}\big\{\frac{1}{n}\sum_{i=1}^{n} f(w; z_i) + \frac{\lambda}{2}\|w\|^2\big\}$. Observe,
$$F(w^\star) \le \mathbb{E} F(w^\lambda_S) \le \mathbb{E}\widehat{F}(w^\lambda_S) + \frac{4G^2}{\lambda n} \le \mathbb{E}\widehat{F}(w^\star_S) + \frac{\lambda}{2} D^2 + \frac{4G^2}{\lambda n},$$
where the second inequality follows from stability of the regularized ERM (see Lemma 13). Choosing $\lambda := 2G/(D\sqrt{n})$, we get that $\mathbb{E}\big[F(w^\star) - \widehat{F}(w^\star_S)\big] = F(w^\star) - \mathbb{E}\widehat{F}(w^\star_S) \le \frac{4GD}{\sqrt{n}}$, as claimed. □

Since the optimization error is always non-negative, we see that the upper bound given by Lemma 1 implies an upper bound on the difference between the population and empirical risks.

Corollary 1. For any distribution $\mathcal{Z}$ over $Z$ and Lipschitz loss function $f : W \times Z \to \mathbb{R}$ convex in the first argument, running SGD with step size $\eta := 1/\sqrt{n}$ guarantees $\mathbb{E}\big[F(\overline{w}_S) - \widehat{F}(\overline{w}_S)\big] \le O(1/\sqrt{n})$.

Proof. We have
$$\mathbb{E}\big[F(\overline{w}_S) - \widehat{F}(\overline{w}_S)\big] = \mathbb{E}\big[F(\overline{w}_S) - F(w^\star)\big] + \mathbb{E}\big[F(w^\star) - \widehat{F}(\overline{w}_S)\big].$$
The population error term on the RHS is $O(1/\sqrt{n})$ by the classical analysis of SGD. The second term is bounded by Lemma 1: $\mathbb{E}[F(w^\star) - \widehat{F}(\overline{w}_S)] \le \mathbb{E}[F(w^\star) - \widehat{F}(w^\star_S)] \le 4GD/\sqrt{n}$, and the result follows. □

In the subsections that follow we continue to study the generalization gap in the context of common variants of the basic SCO setup.
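Before moving on, a minimal sketch may help fix ideas. The following Python snippet (ours, not part of the paper) implements the one-pass projected SGD of Section 2 over the $L_2$ unit ball for a generic, assumed sub-gradient oracle `subgrad`, and numerically checks the norm computation from the proof sketch: an iterate that moves by $-\eta$ along $n$ distinct coordinates has norm $\eta\sqrt{n}$, which is $\Theta(1)$ for $\eta = 1/\sqrt{n}$. The oracle and data here are placeholders; the actual adversarial construction of Theorem 1 lives in the supplementary.

```python
import numpy as np

def project_l2_ball(w, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def one_pass_sgd(subgrad, samples, eta, d):
    """One-pass projected SGD: a single step per training example.

    subgrad(w, z) should return some element of the subdifferential of
    f(.; z) at w. Returns all iterates so that any suffix average (as in
    Theorem 1) can be formed afterwards.
    """
    w = np.zeros(d)                       # w_1 = 0
    iterates = [w]
    for z in samples:                     # each example is visited exactly once
        w = project_l2_ball(w - eta * subgrad(w, z))
        iterates.append(w)
    return iterates

# Norm computation from the proof sketch: n steps of length eta along
# n distinct "bad" coordinates give ||w|| = eta * sqrt(n).
n = 10_000
eta = 1.0 / np.sqrt(n)
w_final = -eta * np.ones(n)               # one bad coordinate per round
print(np.linalg.norm(w_final))            # ~1.0, i.e. eta * sqrt(n)
```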
3.1 SCO with non-convex components

When we relax the convexity assumption and only require the losses to be convex in expectation, we can construct a learning problem where SGD exhibits a case of benign overfitting. In contrast to Theorem 1, here we actually drive the SGD iterates towards an ERM solution, thus achieving a low optimization error and an empirical risk that under-estimates the population risk.

Theorem 2. Let $n \in \mathbb{N}$, $n \ge 4$, $d \ge 2^{4n\log n}$, $W = B_0^{2d}(1)$, and $\eta \le 1/\sqrt{n}$. Then there exists a distribution $\mathcal{Z}$ over $Z$ and a 4-Lipschitz loss $f : W \times Z \to \mathbb{R}$, where $\mathbb{E}_{z\sim\mathcal{Z}} f(w; z)$ is convex in $w$, such that for any suffix average $\overline{w}_S$ of SGD initialized at $w_1 = 0$ with step size $\eta$:
$$\mathbb{E}\big[F(\overline{w}_S) - \widehat{F}(\overline{w}_S)\big] = \Omega(\eta^2 n).$$

The construction and proof of Theorem 2, given in the supplementary, follow a methodology similar to that of Theorem 1. Here, however, we exploit non-convex losses to form an empirical loss landscape where the ERM solution significantly outperforms the population minimizer $w^\star$ (notably, a feat not possible when losses are individually convex, by Corollary 1). Our loss function is defined by $f(w; z) := \sum_{i=1}^{d} z(i)\,w(i)^2 + \varphi(w; z)$, with each component playing a similar role as before. We work with a distribution over $z \in \{-1, 0, 1\}^d$ where $z(i) = 1$ w.p. $\delta$, $z(i) = -1$ w.p. $\delta$, and $z(i) = 0$ w.p. $1 - 2\delta$. The intuition is that coordinates accumulating many $-1$'s offer regions in the $L_2$ ball where the empirical risk is "too good" compared to the population risk. We tailor the extra dimensions and $\varphi$ in coordination with the $-1$ values so that the sub-gradients guide the SGD iterates towards these regions, in exactly the same manner the construction of Theorem 1 drives the iterates to high-loss regions. We note that while the statement of Theorem 2 is specialized to step sizes smaller than $1/\sqrt{n}$, it may be extended to any step size using arguments similar to those given in the proof of Theorem 1.

3.2 SCO with strongly convex components

Our basic construction extends to the strongly convex case with only technical modifications to Theorem 1. The theorem below concerns the standard step size choice for strongly convex objectives. We provide its proof in the supplementary.

Theorem 3. Let $n \in \mathbb{N}$, $n \ge 10$, $d \ge 2^{4n\log n}$, $W = B_0^{2d}(1)$, and $\lambda \ge 1/\sqrt{n}$. Then there exists a distribution over an instance set $Z$ and a 4-Lipschitz, $\lambda$-strongly convex loss function $f : W \times Z \to \mathbb{R}$ such that:
(i) the optimization error is large: $\mathbb{E}_{S\sim\mathcal{Z}^n}\big[\widehat{F}(\overline{w}_S) - \widehat{F}(w^\star_S)\big] = \Omega\big(\frac{1}{\lambda\sqrt{n}}\big)$;
(ii) the generalization gap is large: $\mathbb{E}_{S\sim\mathcal{Z}^n}\big[\widehat{F}(\overline{w}_S) - F(\overline{w}_S)\big] = \Omega\big(\frac{1}{\lambda\sqrt{n}}\big)$;
where $\overline{w}_S$ is any suffix average of SGD initialized at $w_1 = 0$, with step size schedule $\eta_t = 1/(\lambda t)$. Furthermore, the problem instance where this occurs is precisely the $\lambda$-regularized version of the example featured in Theorem 1.

We note that an immediate implication of the above theorem is that if we seek a generalization gap upper bound for a weakly convex problem by means of regularization (meaning, by running SGD on a regularized problem), we would have to take $\lambda \ge 1$ to guarantee a gap of $O(1/\sqrt{n})$. To see this, note that the generalization gap (of any hypothesis) of the regularized problem is the same as that of the original. On the other hand, taking $\lambda \ge 1$ will of course be detrimental to the population error guarantee. Hence, one cannot circumvent the generalization gap lower bound by regularization without compromising the population error. We conclude this section with a note regarding stability rates of SGD in non-smooth SCO.
Implicit in Theorem 1 is that the average stability of SGD coincides with the tight uniform stability rate of $\Theta(\eta\sqrt{n})$ established by [7]. This is because Theorem 1 provides the $\Omega(\eta\sqrt{n})$ lower bound on the most general stability notion, which is precisely the generalization gap [32]. We refer the reader to the supplementary for a more elaborate discussion.

4 SGD with vs. without replacement

In this section, we consider a different algorithm in the context of the basic SCO setup: SGD over examples drawn with replacement from the training set. This is not to be confused with one-pass SGD discussed in Section 3, which corresponds to without-replacement SGD on the training set, or alternatively with-replacement SGD over the population distribution. Given a training set $S = \{z_1, \ldots, z_n\} \sim \mathcal{Z}^n$, we define with-replacement projected SGD initialized at $w_1 \in W$ by
$$w_{t+1} \leftarrow \Pi_W\big(w_t - \eta \hat{g}_t\big), \quad \text{where } \hat{g}_t \in \partial f(w_t; \hat{z}_t) \text{ and } \hat{z}_t \sim \mathrm{Unif}(S).$$
Perhaps surprisingly, this version of SGD does not overfit the training data; our theorem below establishes that with proper iterate averaging, the population risk converges at the optimal rate.

Theorem 4. Let $W \subset \mathbb{R}^d$ with diameter $D$, $\mathcal{Z}$ be any distribution over $Z$, and $f : W \times Z \to \mathbb{R}$ be convex and $G$-Lipschitz in the first argument. Let $S \sim \mathcal{Z}^n$ be a training set of $n \in \mathbb{N}$ datapoints drawn i.i.d. from $\mathcal{Z}$, and consider running SGD over training examples sampled with replacement, uniformly and independently from $S$. Then, for step size $\eta = \frac{D}{G\sqrt{n}}$ and $\overline{w} := \frac{2}{n+1}\sum_{t=1}^{n}\frac{n-t+1}{n} w_t$, the following upper bound holds:
$$\mathbb{E}\big[F(\overline{w}) - F(w^\star)\big] \le \frac{10GD}{\sqrt{n}}.$$

Proof. Fix a time-step $t \in [n]$, and observe that if we do not condition on $S$, we may view the random datapoint $\hat{z}_t$ as a mixture between a fresh i.i.d. sample from the population and a uniformly distributed sample from the previously processed datapoints $\hat{S}_{t-1} := \{\hat{z}_1, \ldots, \hat{z}_{t-1}\}$:
$$\hat{z}_t \mid \hat{S}_{t-1} = \begin{cases} z \sim \mathcal{Z} & \text{w.p. } 1 - \frac{t-1}{n}, \\ z \sim \mathrm{Unif}(\hat{S}_{t-1}) & \text{w.p. } \frac{t-1}{n}. \end{cases}$$
With this in mind, denote $\hat{f}_t(w) := f(w; \hat{z}_t)$, fix $\hat{S}_{t-1}$ and observe:
$$\mathbb{E}_{\hat{z}_t}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star) \mid \hat{S}_{t-1}\big] = \Big(1 - \frac{t-1}{n}\Big)\,\mathbb{E}_{z\sim\mathcal{Z}}\big[f(w_t; z) - f(w^\star; z)\big] + \frac{t-1}{n}\cdot\frac{1}{t-1}\sum_{i=1}^{t-1}\big(\hat{f}_i(w_t) - \hat{f}_i(w^\star)\big).$$
Rearranging and taking expectation with respect to $\hat{S}_{t-1}$ we obtain
$$\Big(1 - \frac{t-1}{n}\Big)\,\mathbb{E}\big[F(w_t) - F(w^\star)\big] = \mathbb{E}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big] + \mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^{t-1}\big(\hat{f}_i(w^\star) - \hat{f}_i(w_t)\big)\Big] \le \mathbb{E}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big] + \frac{4GD\sqrt{t}}{n}, \qquad (2)$$
where the inequality follows from Lemma 1. Now, by a direct computation we have $\sum_{t=1}^{n}\big(1 - \frac{t-1}{n}\big) = \frac{n+1}{2}$, which motivates setting $\overline{w} := \frac{2}{n+1}\sum_{t=1}^{n}\frac{n-t+1}{n} w_t$. By convexity of $F$, Eq. (2), and the standard regret analysis of gradient descent [e.g., 15] we now have
$$\mathbb{E}\big[F(\overline{w}) - F(w^\star)\big] \le \frac{2}{n+1}\sum_{t=1}^{n}\Big(1 - \frac{t-1}{n}\Big)\,\mathbb{E}\big[F(w_t) - F(w^\star)\big] \le \frac{2}{n+1}\sum_{t=1}^{n}\mathbb{E}\big[\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big] + \frac{2}{n+1}\sum_{t=1}^{n}\frac{4GD\sqrt{t}}{n} \le \frac{2}{n}\,\mathbb{E}\Big[\sum_{t=1}^{n}\big(\hat{f}_t(w_t) - \hat{f}_t(w^\star)\big)\Big] + \frac{8GD}{\sqrt{n}} \le \frac{2}{n}\Big(\frac{D^2}{2\eta} + \frac{\eta G^2 n}{2}\Big) + \frac{8GD}{\sqrt{n}} = \frac{10GD}{\sqrt{n}},$$
where the last inequality follows by our choice of $\eta = \frac{D}{G\sqrt{n}}$. □

Evidently, the averaging scheme dictated by Theorem 4 does little to hurt the empirical risk convergence guarantee, which follows from the standard analysis with minor modifications (for completeness we provide a formal statement and proof in the supplementary). Combined with Lemma 1, this immediately implies a generalization gap upper bound for with-replacement SGD. Notably, this shows with-replacement SGD provides an example of a (natural) algorithm in the SCO learning setup that is not even stable on average, but nonetheless has a well-bounded generalization gap; we refer the reader to the discussion in the supplementary for more details.
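As an illustration of the algorithm and the averaging scheme of Theorem 4, the following Python sketch is ours, with `subgrad` and `project` as assumed placeholder callables. Note the weights $\frac{2}{n+1}\cdot\frac{n-t+1}{n}$ sum to one and down-weight late iterates, which are the ones most likely to have already revisited training examples.

```python
import numpy as np

def with_replacement_sgd(subgrad, samples, eta, d, project, seed=0):
    """SGD over examples drawn uniformly with replacement from the training
    set, returning the weighted average of Theorem 4:
        w_bar = (2/(n+1)) * sum_t ((n - t + 1)/n) * w_t.
    """
    rng = np.random.default_rng(seed)
    n = len(samples)
    w = np.zeros(d)                                      # w_1 = 0
    w_bar = np.zeros(d)
    for t in range(1, n + 1):
        w_bar += (2.0 / (n + 1)) * ((n - t + 1) / n) * w  # weight of w_t
        z = samples[rng.integers(n)]                      # z_hat_t ~ Unif(S)
        w = project(w - eta * subgrad(w, z))
    return w_bar
```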
Corollary 2. For any distribution $\mathcal{Z}$ and loss function $f : W \times Z \to \mathbb{R}$ convex and Lipschitz in the first argument, running SGD with step size and averaging as specified in Theorem 4 ensures $\mathbb{E}\big[F(\overline{w}) - \widehat{F}(\overline{w})\big] \le O(1/\sqrt{n})$.

Proof. We have
$$\mathbb{E}\big[F(\overline{w}) - \widehat{F}(\overline{w})\big] \le \mathbb{E}\big[F(\overline{w}) - F(w^\star)\big] + \mathbb{E}\big[F(w^\star) - \widehat{F}(w^\star_S)\big] + \mathbb{E}\big[\widehat{F}(w^\star_S) - \widehat{F}(\overline{w})\big].$$
The first term is upper bounded by the convergence of the population risk provided by Theorem 4, the second by Lemma 1, and the third by the standard analysis of SGD (see the supplementary). □

5 Multi-epoch SGD for empirical risk minimization

In this section, we forgo the existence of a population distribution and discuss convergence properties of without-replacement SGD (wor-SGD) for finite-sum optimization problems. A relatively long line of work discussed in the introduction studies this problem in the smooth case. The work of [20] noted that smoothness is a necessary assumption to obtain rates that are strictly better than the $O(1/\sqrt{nK})$ guaranteed by with-replacement SGD for $n$ losses and $K$ epochs, due to a lower bound that follows from the deterministic case (e.g., [10]). Here we establish that smoothness is in fact necessary to obtain rates that are not strictly worse than with-replacement SGD. We consider running multiple passes of wor-SGD to solve the finite-sum optimization problem given by the objective
$$\widehat{F}(w) := \frac{1}{n}\sum_{t=1}^{n} f(w; t) \qquad (3)$$
where $\{f(w; t)\}_{t=1}^{n}$ is a set of $n$ convex, $G$-Lipschitz losses defined over a convex and compact domain $W \subseteq \mathbb{R}^d$. Throughout this section we let $w^\star \in \arg\min_{w\in W} \widehat{F}(w)$ denote a minimizer of the objective in Eq. (3). In every epoch $k \in [K]$ we process the losses in the order specified by a permutation $\pi_k$ of $[n]$ sampled uniformly at random, either once at the beginning of the algorithm (single-shuffle), or at the onset of every epoch (multi-shuffle). Multi-epoch wor-SGD initialized at $w_1^1 \in W$ is specified by the following equations:
$$w_{t+1}^k \leftarrow \Pi_W\big(w_t^k - \eta g_t^k\big), \quad \text{where } g_t^k \in \partial f_t^k(w_t^k); \qquad w_1^{k+1} := w_{n+1}^k,$$
where we denote $f_t^k(w) := f(w; \pi_k(t))$.

A near-immediate implication of Theorem 1 is that there exists a set of convex losses on which a single epoch of wor-SGD cannot converge at a rate faster than $1/n^{1/4}$. Theorem 5 presented below extends our basic construction from Theorem 1 to accommodate multiple epochs. The main challenge here is in devising a mechanism that will allow fresh bad gradient steps to take place in every new epoch.

Theorem 5. Let $n, K \in \mathbb{N}$, $K \ge 4$, $n \ge 4$, $c := 4/(2^{1/K} - 1)$, $d \ge 2^{6n\log(cnK)}$, and $W = B_0^{d'}(1)$ where $d' = (nK + 1)d$. Then there exists a set of $n$ convex, 4-Lipschitz losses such that after $K$ epochs of either multi-shuffle or single-shuffle SGD initialized at $w_1^1 = 0$ with step size $\eta \le 1/\sqrt{2nK}$, it holds that
$$\mathbb{E}\big[\widehat{F}(\overline{w}) - \widehat{F}(w^\star)\big] = \Omega\Big(\min\Big\{1,\; \eta\sqrt{\frac{n}{J}} + \frac{1}{\eta n K} + \eta\Big\}\Big),$$
where $\overline{w}$ is any suffix average of the last $J$ epochs. In particular, we obtain a bound of $\Omega\big(n^{-1/4}K^{-3/4}\big)$ for any suffix average and any choice of $\eta$.

The proof of Theorem 5 is provided in the supplementary. The construction in the proof takes the idea that the training set can be encoded in the SGD iterate to the extreme. The loss function and gradient oracle are designed in such a way so as to record the training examples, in their full form and order, into the iterate. We then exploit this encoded information with an "adversarial" gradient oracle that returns the bad sub-gradients on each gradient step in every new epoch.
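For concreteness, before turning to the upper bounds, here is a schematic of the multi-epoch wor-SGD procedure just defined, in both shuffling modes. This is our own minimal sketch; `subgrad` and `project` are assumed placeholder callables, and it returns the average of all $nK$ iterates as used in Theorem 6.

```python
import numpy as np

def wor_sgd(subgrad, losses, eta, d, K, project, multi_shuffle=True, seed=0):
    """K epochs of without-replacement SGD over n finite-sum losses.

    multi_shuffle=True draws a fresh permutation at the onset of every
    epoch; multi_shuffle=False (single-shuffle) draws one permutation up
    front and reuses it. Returns the average of all nK iterates.
    """
    rng = np.random.default_rng(seed)
    n = len(losses)
    perm = rng.permutation(n)            # fixed order if single-shuffle
    w = np.zeros(d)                      # w_1^1 = 0
    w_bar = np.zeros(d)
    for k in range(K):
        if multi_shuffle:
            perm = rng.permutation(n)    # reshuffle at the onset of each epoch
        for t in perm:
            w_bar += w / (n * K)         # running average of the iterates w_t^k
            w = project(w - eta * subgrad(w, losses[t]))
        # w_1^{k+1} = w_{n+1}^k: the iterate simply carries over between epochs
    return w_bar
```

With the step sizes prescribed by Theorem 6, e.g. $\eta = D/(Gn^{3/4}K^{1/2})$ in the multi-shuffle mode, the returned average enjoys the guarantees stated there.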
Next, we complement Theorem 5 with an upper bound that builds on stability arguments similar to those of the smooth case [20]. Importantly though, lack of smoothness means worse stability rates and necessitates extra care in the technical arguments. Below, we prove the multi-shuffle case, and defer the full details of the single-shuffle case to the supplementary.

Theorem 6. Let $S = \{f(w; t)\}_{t=1}^{n}$ be a set of $n$ convex, $G$-Lipschitz losses over a convex and compact domain $W \subseteq \mathbb{R}^d$ of diameter $D$, and consider running $K \ge 1$ epochs of wor-SGD over $S$. Then, we have the following guarantees:
(i) For multi-shuffle, with step size $\eta = D/(G n^{3/4} K^{1/2})$, we have
$$\mathbb{E}\big[\widehat{F}(\overline{w}) - \widehat{F}(w^\star)\big] \le \frac{3GD}{n^{1/4}K^{1/2}}.$$
(ii) For single-shuffle, with step size $\eta = D/(2G n^{3/4} K^{3/4})$ and assuming $K \ge n$, we have
$$\mathbb{E}\big[\widehat{F}(\overline{w}) - \widehat{F}(w^\star)\big] \le \frac{10GD}{n^{1/4}K^{1/4}}.$$
In both of the above bounds, $\overline{w} = \frac{1}{nK}\sum_{k\in[K],\, t\in[n]} w_t^k$, and the expectation is over the random permutations of the losses.

Proof (multi-shuffle case). Observe:
$$\widehat{F}(\overline{w}) - \widehat{F}(w^\star) \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(\widehat{F}(w_t^k) - \widehat{F}(w^\star)\big) = \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(\widehat{F}(w_t^k) - f_t^k(w^\star)\big)$$
$$= \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(\widehat{F}(w_t^k) - f_t^k(w_t^k)\big) + \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(f_t^k(w_t^k) - f_t^k(w^\star)\big)$$
$$\le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\big(\widehat{F}(w_t^k) - f_t^k(w_t^k)\big) + \frac{D^2}{2\eta n K} + \frac{\eta G^2}{2},$$
with the last inequality following from the standard $nK$-round regret bound for gradient descent [see e.g., 15]. To bound the other term, using Lemma 10, we relate the difference between the without-replacement loss distribution and the full-batch objective to the uniform stability rate of SGD, which may then be bounded by applying Lemma 11:
$$\mathbb{E}\big[\widehat{F}(w_t^k) - f_t^k(w_t^k)\big] = \mathbb{E}_{\pi_1,\ldots,\pi_{k-1}}\mathbb{E}_{\pi_k}\big[\widehat{F}(w_t^k) - f_t^k(w_t^k) \mid w_1^k\big] \le \mathbb{E}_{\pi_1,\ldots,\pi_{k-1}}\big[G\,\epsilon_{\mathrm{stab}}^{\mathrm{SGD}}(t-1)\big] = G\,\epsilon_{\mathrm{stab}}^{\mathrm{SGD}}(t-1) \le 2\eta G^2\sqrt{t}.$$
Concluding, we have that
$$\mathbb{E}\big[\widehat{F}(\overline{w}) - \widehat{F}(w^\star)\big] \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n}\mathbb{E}\big[\widehat{F}(w_t^k) - f_t^k(w_t^k)\big] + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2} \le \frac{1}{nK}\sum_{k=1}^{K}\sum_{t=1}^{n} 2\eta G^2\sqrt{t} + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2} \le 2\eta G^2\sqrt{n} + \frac{D^2}{2\eta nK} + \frac{\eta G^2}{2} \le \frac{3GD}{n^{1/4}K^{1/2}},$$
where the last inequality follows from our choice of $\eta = D/(G n^{3/4} K^{1/2})$. □

Acknowledgements and funding disclosure

This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 882396), by the Israel Science Foundation (grants number 993/17, 2549/19, 2188/20), by the Len Blavatnik and the Blavatnik Family foundation, by the Yandex Initiative in Machine Learning at Tel Aviv University, by a grant from the Tel Aviv University Center for AI and Data Science (TAD), and by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What are the strengths and weaknesses of the paper's technical results? 2. How does the reviewer perceive the conclusions drawn from the results, and what specific claims in the introduction do they find misleading or false? 3. How does the reviewer view the setting considered in the paper in relation to practical scenarios? 4. What minor comments or suggestions does the reviewer offer regarding the presentation of the results?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The authors provide a stochastic convex optimisation framework in which one-pass SGD (classically) minimises the population risk at an $O(1/\sqrt{n})$ rate but nevertheless exhibits an $\Omega(1)$ training error and generalisation error. Strengths And Weaknesses The paper is well written. Each result is clear, introduced and commented. The analysis is rigorous and non-trivial. Overall I have no criticism concerning the technical results, which I believe are original. However, the main results in Section 3 seem anecdotal in the sense that (1) nobody does one-pass SGD and (2) the considered setting is very peculiar and clearly not encountered on a daily basis: a cherry-picked distribution and loss, $d \ge 2^{4n\log n}$. This on its own would not bother me (as mentioned, I find the results original and interesting); however, my main concern is the conclusions / interpretations which are drawn from the results. Overall I find that the introduction is very misleading; here are several claims in it which I believe are misleading / false:
"First, it is clear that SGD cannot be framed as any reasonable regularized empirical risk minimization procedure for the simple reason that it does not minimize the empirical risk, which challenges the implicit regularization viewpoint to the generalization of SGD": the implicit regularization viewpoint is crucial in the interpolation setting (a setting which is met in practice). Hence I cannot agree with your statement: it does not challenge the viewpoint, as you consider a totally different setting.
"Second, any attempt to explain generalization of SGD by uniform convergence over any (possibly data-dependent) hypotheses set cannot hold, simply because the sample average associated with the very same training set SGD was trained on is not necessarily close to its respective expectation": (1) your result holds only for one-pass SGD (which nobody does in practice), hence one could expect the classical train / generalisation gap to hold for multiple passes.
In the abstract: "Consequently, it turns out that SGD is not algorithmically stable in any sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis)": again this only holds for one-pass SGD, which is never considered in practice.
Minor comments:
- In the population risk convergence, it would have been clearer to put the exact convergence bound instead of a big $O(\cdot)$, so that it is clear that there is no hidden dependence on the dimension $d$ which would have killed the rate.
- A conclusion would have been appreciated.
- Line 183: $w^\star_S = \arg\min_w \widehat{F}(w)$: why is the arg min unique? If it is not unique, which one do you choose?
Questions N/A Limitations N/A
NIPS
Title Benign Underfitting of Stochastic Gradient Descent
Abstract We study to what extent may stochastic gradient descent (SGD) be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to training data. We consider the fundamental stochastic convex optimization framework, where (one pass, without-replacement) SGD is classically known to minimize the population risk at rate $O(1/\sqrt{n})$, and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of $\Omega(1)$. Consequently, it turns out that SGD is not algorithmically stable in any sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related with-replacement SGD, for which we show that an analogous phenomenon does not occur and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results.

1 Introduction

Conventional wisdom in statistical learning revolves around what is traditionally known as the bias-variance dilemma; the classical theory stipulates the quality of fit to the training data be in a trade-off with model complexity, aiming for a sweet spot where training error is small but yet representative of performance on independent test data. This perspective is reflected in the vast majority of generalization bound techniques offered by contemporary learning theory. Uniform convergence approaches [36, 4] seek capacity control over the model function class, and employ uniform laws of large numbers to argue convergence of sample averages to their respective expectations. Algorithmic stability [9, 32], on the other hand, builds on controlling the sensitivity of the learning algorithm to small changes in its input, and provides algorithm dependent bounds. Nevertheless, despite the conceptual and technical differences between these two methods, both ultimately produce risk bounds by controlling the training error and the generalization gap. The same is true for many other techniques, including sample compression [17, 2], PAC-Bayes [18, 12], and information theoretic generalization bounds [29, 37, 24], to name a few. In recent years it has become clear there are other, substantially different, ways to manage the fit vs.
complexity trade-off, that are in a sense incompatible with traditional generalization bound techniques. Evidently, heavily over-parameterized deep neural networks may be trained to perfectly fit training data and generalize well nonetheless [38, 25, 26], thus seemingly disobeying conventional statistical wisdom. This phenomenon has garnered significant attention, with a flurry of research works dedicated to developing new techniques that would be able to explain strong generalization performance of algorithms in this so-called interpolation regime (see [6, 8] and references therein). Notably, while these algorithms do not strike a balance between model complexity and fit to the data in the traditional sense, fundamentally, they still minimize the empirical risk as a proxy to test performance. To summarize, in the classical and modern regimes alike, learning methods are thought of as minimizing some combination of the training error and generalization gap, with reasoning that relies in one way or another on the following trivial, yet arguably most profound, bound:
$$\text{test-error} \le \text{train-error} + |\text{generalization gap}|. \qquad (1)$$
In this work, we focus on stochastic gradient descent (SGD) — the canonical algorithm for training machine learning models nowadays — and ask whether its generalization performance can be understood through a similar lens. We consider the fundamental stochastic convex optimization (SCO) framework, in which it is well known that SGD minimizes the population risk at a rate of $O(1/\sqrt{n})$ [23]. Remarkably, the classical analysis targets the population risk directly, and in contrast with other generalization arguments, at least seemingly does not rely on the above bound. This highlights an intriguing question: Are these quantities, so fundamental to learning theory, relevant to the way that SGD "works"? Put differently, is it possible to provide a more "conventional" analysis of SGD that conforms with (1)? Our main result shows that, perhaps surprisingly, there exist convex learning problems where the above bound becomes vacuous for SGD: namely, SGD minimizes the population risk, but at the same time, it does not minimize the empirical risk and thus exhibits a constant generalization gap. This accords neither with the traditional viewpoint nor with that of interpolation, as both recognize the empirical risk as the principal minimization objective. We refer to this phenomenon as benign underfitting: evidently, SGD underfits the training data, but its classical analysis affirms this underfitting to be benign, in the sense that test performance is never compromised as a result. Our construction presents a learning problem where the output of SGD with step size $\eta$ over $n$ i.i.d. training examples is $\Omega(\eta\sqrt{n})$ sub-optimal w.r.t. the best fit possible, and consequently has a generalization gap of the same order. Notably, with the standard step size choice of $1/\sqrt{n}$ necessary to ensure the population risk converges at the optimal rate, this lower bound amounts to a constant. Many previously plausible explanations for generalization properties of this algorithm are thereby rendered inadequate, at least in the elementary convex setup we consider here. First, it is clear that SGD cannot be framed as any reasonable regularized empirical risk minimization procedure for the simple reason that it does not minimize the empirical risk, which challenges the implicit regularization viewpoint on the generalization of SGD.
Second, any attempt to explain generalization of SGD by uniform convergence over any (possibly data-dependent) hypothesis set cannot hold, simply because the sample average associated with the very same training set SGD was trained on is not necessarily close to its respective expectation. Finally, as it turns out, SGD provides a strikingly natural example of an algorithm that generalizes well but is not stable in any sense, as the most general notion of algorithmic stability is entirely equivalent to the generalization gap [32]. We then move on to study the generalization gap and empirical risk guarantees of SGD in a broader context. We study the case of non-convex and strongly convex component functions, and present natural extensions of our basic result. In addition, we analyse the variant of SGD where datapoints are sampled with replacement from the training set, in which case the train error is of course low but, perhaps surprisingly, the population risk is well behaved. Finally, we make the natural connection to the study of without-replacement SGD for empirical risk minimization, and derive upper and lower bounds for the multi-epoch regime. These last two points are discussed in further detail in the following.

With vs. without-replacement SGD. We may view one-pass SGD as processing the data via without-replacement sampling from the training set, as randomly reshuffling the examples does not change their unconditional distribution. Thus, it is interesting to consider the generalization gap of the closely related algorithm given by running SGD over examples sampled with replacement from the training set. Considering the instability (see the supplementary for a detailed discussion) of SGD for non-smooth losses and the fact that this variant targets the empirical objective, a priori it would seem this algorithm would overfit the training set and not provide strong population risk guarantees. Surprisingly, our analysis presented in Section 4 reveals this is not the case, and that with a certain iterate averaging scheme the population risk converges at the optimal rate. Consequently, it turns out the generalization gap is well bounded, and therefore that this variant constitutes a natural learning rule that is not stable in any sense but the most general one.

Without-replacement SGD for empirical risk minimization. The example featured in our main construction implies a lower bound of $\Omega(n^{-1/4})$ on the convergence rate of a single epoch of without-replacement SGD for finite-sum optimization problems. In this setting, we have a set of $n$ convex losses and we wish to minimize their sum by running SGD over random shufflings of the losses. While the smooth case has been studied extensively (e.g., [28, 27, 20, 31]), the non-smooth case has hardly received much attention. In Section 5 we extend our basic construction to a lower bound for the multi-epoch regime, and complement it with nearly matching upper bounds.

Our techniques. Fundamentally, we exploit the fact that dimension-independent uniform convergence does not hold in SCO [32]. This is a prerequisite to any attempt at separating train and test losses of any hypothesis vector, let alone that produced by SGD. Another essential condition is the instability of SGD for non-smooth losses, as any form of stability would immediately imply a generalization gap upper bound regardless of uniform convergence.
Our main lower bound draws inspiration from constructions presented in the works of [7] and [1], both of which rely on instability, the latter also exploiting failure of uniform convergence. However, neither of these contains the main ideas necessary to provoke the optimization dynamics required in our example. A crucial ingredient in our construction consists of encoding into the SGD iterate information about previous training examples. This, combined with careful design of the loss function, gradient oracle and population distribution, allows correlating sub-gradients of independent training examples, and in turn guiding the SGD iterates to ascend the empirical risk. 1.1 Summary of main contributions To summarize, the main contributions of the paper are as follows: • One-pass SGD in SCO. In Section 3, we study the basic SCO setup where the component losses are assumed to be individually convex, and present a construction where the expected empirical risk and therefore the generalization gap are both Ω(η √ 𝑛). We also provide extensions of our main construction demonstrating; – SCO with non-convex component functions may exhibit cases of benign overfitting, where 𝔼 [ 𝐹 (𝑤) − 𝐹 (𝑤) ] = Ω(η2𝑛). – In SCO with λ-strongly convex losses the worst case generalization gap is Ω(1/λ √ 𝑛) for the standard step size choice. • With vs without replacement SGD in SCO. In Section 4, we prove the variant of SGD where the training examples are processed via sampling with-replacement from the training set minimizes the population risk at the optimal rate, and thus enjoys a generalization gap upper bound bound of 𝑂 (1/ √ 𝑛). • Multi-epoch without-replacement SGD. In Section 5, we study convergence rates of withoutreplacement SGD for finite sum convex optimization problems. We prove a lower bound of Ω(𝑛−1/4𝐾−3/4) on the optimization error after 𝐾 epochs over 𝑛 convex losses, and complement with upper bounds of 𝑂 (𝑛−1/4𝐾−1/2) and 𝑂 (𝑛−1/4𝐾−1/4) for respectively the multi-shuffle and single-shuffle SGD variants. 1.2 Additional related work Gradient descent, algorithmic stability and generalization. Closely related to our work is the study of stability properties of SGD. For smooth losses, [14] provide upper bounds on the generalization gap by appealing to uniform stability, yielding an 𝑂 (1/ √ 𝑛) rate for a single epoch of 𝑛 convex losses and the standard step size choice. In a later work, [7] prove tight rates for uniform stability of SGD in the setting of non-smooth losses, establishing these scale substantially worse; Θ(η √ 𝑛) for step size η and 𝑛 training examples. Our work shows that in fact the worst case rate of the generalization gap completely coincides with the uniform stability rate of SGD. A number of works prior to ours studied the extent to which SGD can be explained by implicit regularization in SCO. [16] study the setup where losses are smooth but only required to be convex in expectation, and show SGD may successfully learn when regularized ERM does not. Prior to their work, [11] also rule out a wide range of implicit regularization based explanations of SGD in the basic SCO setup with convex losses. On a more general level, our work is related to the study of stability and generalization in modern learning theory, pioneered by [9, 32]. In particular, the failure of (dimension independent) uniform convergence in SCO was established in [32]. The work of [13] improves the dimension dependence in the construction of [32] from exponential to linear in the number of training examples. 
Notably, the construction featured in our main result requires the dimension to be exponential in the sample size, however the techniques of [13] do not readily extend to our setting. Thus, the optimal dimension dependence for a generalization gap lower bound is left for future work. Without-replacement SGD for empirical risk minimization. A relatively long line of work studies convergence properties of without-replacement SGD from a pure optimization perspective (e.g., [28, 20, 30, 27, 19, 31]). Nearly all the papers in this line of work adopt the smoothness assumption, with near optimal bounds established by [20]. An exception is the paper of [33] where an 𝑂 (1/ √ 𝑛𝐾) upper bound is obtained for 𝑛 datapoints and 𝐾 epochs, albeit only for generalized linear models over a bounded domain — notably, a setting where uniform convergence holds. Prior to this thread of research, [22] prove a convergence rate of 𝑂 (𝑛/ √ 𝐾) for non-smooth loss functions that applies for any ordering of the losses. To the best of our knowledge, this is also the state-of-the-art result for without-replacement SGD in the non-smooth setting without further assumptions on the loss functions. Benign overfitting vs. benign underfitting. While both benign underfitting and benign overfitting challenge traditional generalization techniques, that postulate the training error to represent the test error, as we discuss above these two phenomena point to very different regimes of learning. In particular, [34] shows that benign overfitting requires distributional assumptions for the interpolating algorithm to succeed. In contrast, we show that benign underfitting happens for SGD in a setting where it provably learns (namely, SCO), without any distributional assumptions. We also point out that Corollary 1 shows benign overfitting cannot happen in the setup we consider, hence the two phenomena seem to rise in different setups. Explaining generalization of interpolators. As already discussed, there is a large recent body of work dedicated to understanding why over-parameterized models trained by SGD to zero training error generalize well [6, 8, and references therein]. In particular, the work of [5] aims at explaining the phenomenon for high dimensional linear models. Some recent papers investigate limitations of certain techniques in explaining generalization of interpolating algorithms: [21] show uniform convergence fails to explain generalization of SGD in a setup where the generalization gap is in fact well bounded, thus in sharp contrast to our work; [3] rule out the possibility of a large class of excess risk bounds to explain generalization of minimum norm interpolants. Unlike our work, they study properties of possible risk bounds when benign overfitting occurs, and thus do not pertain to SGD that never benignly overfits in SCO. 2 Preliminaries We consider stochastic convex optimization (SCO) specified by a population distribution Z over a datapoint set 𝑍 , and loss function 𝑓 : 𝑊 × 𝑍 → ℝ where𝑊 ⊂ ℝ𝑑 is convex and compact. We denote 𝐹 (𝑤) B 𝔼𝑧∼Z 𝑓 (𝑤; 𝑧), (population loss) 𝐹 (𝑤) B 1 𝑛 𝑛∑︁ 𝑖=1 𝑓 (𝑤; 𝑧𝑖), (empirical loss) where {𝑧1, . . . , 𝑧𝑛} ⊆ 𝑍 stands for the training set, which we regularly denote by 𝑆. We let 𝑤★ B min𝑤∈𝑊 𝐹 (𝑤) denote the population minimizer, and 𝑤★𝑆 B min𝑤∈𝑊 𝐹 (𝑤) denote the empirical risk minimizer (ERM). The diameter of𝑊 is defined by max𝑥,𝑦∈𝑊 {∥𝑥 − 𝑦∥} where ∥·∥ denotes the euclidean norm, and B𝑑0 (1) B { 𝑥 ∈ ℝ𝑑 | ∥𝑥∥ ≤ 1 } denotes the 𝐿2 unit ball in ℝ𝑑 . Given a training set 𝑆 = {𝑧1, . . . 
, 𝑧𝑛} ∼ Z𝑛 and a learning algorithm that outputs a hypothesis 𝑤𝑆 , we define the generalization gap to be the absolute value of the expected difference between test and train losses; 𝔼𝑆∼Z𝑛 [𝐹 (𝑤𝑆) − 𝐹 (𝑤𝑆)] . (generalization gap) Throughout most of the paper, we consider one-pass projected SGD over 𝑆; initialize at 𝑤1 ∈ 𝑊 ; for 𝑡 = 2, . . . , 𝑛 : 𝑤𝑡+1 ← Π𝑊 (𝑤𝑡 − η𝑔𝑡 ) , with 𝑔𝑡 ∈ 𝜕 𝑓 (𝑤𝑡 ; 𝑧𝑡 ), where 𝜕 𝑓 (𝑤; 𝑧) denotes the set of sub-gradients of 𝑓 (·; 𝑧) → ℝ at the point𝑤 ∈ 𝑊 , andΠ𝑊 : ℝ𝑑 → 𝑊 the projection operation onto𝑊 . 3 A generalization gap lower bound for SGD In this section, we establish our main result; that there exist convex learning problems where SGD incurs a large optimization error and therefore also a large generalization gap. When losses are convex these two quantities are closely related since in expectation, the empirical risk minimizer cannot significantly outperform the population minimizer (a claim that will be made rigorous shortly after our main theorem). Our construction builds on losses that are highly non-smooth, leading to SGD taking gradient steps that actually ascend the empirical objective. Theorem 1. Let 𝑛 ∈ ℕ, 𝑛 ≥ 4, 𝑑 ≥ 24𝑛 log 𝑛, and 𝑊 = B2𝑑0 (1). Then there exists a distribution over instance set 𝑍 and a 4-Lipschitz convex loss function 𝑓 : 𝑊 × 𝑍 → ℝ such that running SGD initialized at 𝑤1 = 0, with step size η > 0 over 𝑆 ∼ Z𝑛 yields; (i) a large optimization error; 𝔼 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤★𝑆) ] = Ω ( min { η √ 𝑛, 1 η √ 𝑛 }) , (ii) a large generalization gap; 𝔼 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤𝑆) ] = Ω ( min { η √ 𝑛, 1 η √ 𝑛 }) , where 𝑤𝑆 is any suffix average of the iterates. In particular, for η = Θ(1/ √ 𝑛), the population risk is 𝔼 [𝐹 (𝑤𝑆) − 𝐹 (𝑤★)] = 𝑂 (1/ √ 𝑛), while the generalization gap and training error are both Ω (1) . A detailed proof of Theorem 1 is deferred to the supplementary; in the following we provide an informal overview containing its principal ingredients. Proof sketch. Let 𝑍 B {0, 1}𝑑 , and consider a population distribution Z such that 𝑧(𝑖) = 1 with probability δ. We will use a loss function of the form 𝑓 (𝑤; 𝑧) B ∥𝑧 ⊙ 𝑤∥ + φ(𝑤; 𝑧), where ⊙ denotes element-wise product. The high level idea is that the norm component penalizes 𝑤’s that correlate with the given sample point 𝑧, and the φ function (the details of which are left for the supplementary) is tailored so that it drives the SGD iterates precisely to those areas in the 𝐿2 ball where it correlates with the training set {𝑧1, . . . , 𝑧𝑛}. In addition, the choice of parameters is such that the population loss is approximately zero over the entire domain. Taking 𝑑 sufficiently large compared to δ−1, we ensure that w.h.p., for every round 𝑡 ∈ [𝑛] there exist many coordinates 𝑖 ∈ [𝑑] with a prefix of ones; 𝑧1 (𝑖) = · · · = 𝑧𝑡−1 (𝑖) = 1 . With δ chosen sufficiently small compared to 𝑛, we ensure that as long as 𝑖 ∈ [𝑑] is any coordinate chosen independently of {𝑧𝑡+1, . . . , 𝑧𝑛}, w.h.p. this coordinate will have a suffix of zeros; 𝑧𝑡+1 (𝑖) = · · · = 𝑧𝑛 (𝑖) = 0. Our goal is to make SGD take steps 𝑤𝑡+1 ≈ 𝑤𝑡 − η𝑒𝑖𝑡 (where 𝑒𝑖 denotes the 𝑖’th standard basis vector) where 𝑖𝑡 ∈ [𝑑] is a coordinate with the aforementioned property of having a prefix of ones followed by a suffix of zeros. Note that since these steps are taken after the prefix of ones has ended, they will inflict large empirical loss from the norm component, but will not be “corrected” by future steps owed to the suffix of zeros. To achieve this, we design φ so that it encodes the relevant information into the SGD iterates. 
Specifically, φ “flags” (using some extra dimensions) all coordinates 𝑖 ∈ [𝑑] where a prefix of ones has been encountered. In addition, using another max component in φ we have that for all such coordinates 𝑖, 𝑒𝑖 ∈ 𝜕 𝑓 (𝑤𝑡 ; 𝑧) for any example 𝑧 (as this component in the loss depends only on the iterate 𝑤𝑡 ). In particular, we get that 𝑒𝑖 ∈ 𝜕 𝑓 (𝑤𝑡 ; 𝑧𝑡 ). Then, our gradient oracle just returns a subgradient pointing towards one of these coordinates (for convenience, we use the minimal one) which we denote by 𝑖𝑡 , and SGD makes the desired step. Notably, the coordinate 𝑖𝑡 chosen by the subgradient oracle is independent of future examples, and therefore will have a suffix of zeros w.h.p. Hence, as mentioned, this ensures no gradient signal after round 𝑡 will be able to correct the empirical risk ascent on 𝑖𝑡 . Concluding, we have that for the final iterate 𝑤 B 𝑤𝑛+1, we get 𝑤(𝑖𝑡 ) = −η for all 𝑡 ∈ [𝑛], therefore 𝐹 (𝑤) = 1 𝑛 𝑛∑︁ 𝑖=1 𝑓 (𝑤; 𝑧𝑖) ≈ 1 𝑛 𝑛∑︁ 𝑖=1 ∥𝑧𝑖 ⊙ 𝑤∥ ≈ ∥𝑤∥ ≈ √︃ η2𝑛 = η √ 𝑛. A similar argument requiring a few more technical steps shows the same is true for any suffix average 𝑤. Noting that 𝐹 (0) = 0, we get that the optimization error is Ω(η √ 𝑛). The implication for the generalization gap follows immediately with the standard step size choice of η = 1/ √ 𝑛, owed to SGD’s population risk convergence guarantee. For an arbitrary step size, the result follows from a simple computation, and the proof is concluded. □ The magnitude of the generalization gap featured in Theorem 1 stems from the large optimization error, which results in the empirical risk over-estimating the population risk by a large margin. Evidently, for convex losses the converse is always false; the empirical risk will never significantly under-estimate the population risk (a fact that will turn out false when losses are only required to be convex in expectation — see Section 3.1). Indeed, stability of the regularized ERM solution implies the ERM does not perform significantly better on the training set compared to the population minimizer 𝑤★. Lemma 1. Let 𝑊 ⊂ ℝ𝑑 with diameter 𝐷, Z any distribution over 𝑍 , and 𝑓 : 𝑊 × 𝑍 → ℝ convex and 𝐺-Lipschitz in the first argument. Then 𝔼 [ 𝐹 (𝑤★) − 𝐹 (𝑤★ 𝑆 ) ] ≤ 4𝐺𝐷√ 𝑛 . Proof. Denote the regularized ERM by 𝑤λ 𝑆 B arg min𝑤∈𝑊 { 1 𝑛 ∑𝑛 𝑖=1 𝑓𝑖 (𝑤; 𝑧𝑖) + λ2 ∥𝑤∥ 2} . Observe, 𝐹 (𝑤★) ≤ 𝔼𝐹 (𝑤λ𝑆) ≤ 𝔼𝐹 (𝑤 λ 𝑆) + 4𝐺2 λ𝑛 ≤ 𝔼𝐹 (𝑤★𝑆) + λ 2 𝐷2 + 4𝐺 2 λ𝑛 , where the second inequality follows from stability of the regularized ERM (see Lemma 13). Choosing λ B 2𝐺𝐷/ √ 𝑛, we get that 𝔼 [ 𝐹 (𝑤★) − 𝐹 (𝑤★𝑆) ] = 𝐹 (𝑤★) − 𝔼𝐹 (𝑤★𝑆) ≤ 4𝐺𝐷 √ 𝑛 , as claimed. □ Since the optimization error is always positive, we see that the upper bound given by Lemma 1 implies an upper bound on the difference between the population and empirical risks. Corollary 1. For any distribution Z over 𝑍 and Lipschitz loss function 𝑓 : 𝑊 × 𝑍 → ℝ convex in the first argument, running SGD with step size η B 1/ √ 𝑛 guarantees 𝔼 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤𝑆) ] ≤ 𝑂 (1/ √ 𝑛). Proof. We have, 𝔼 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤𝑆) ] = 𝔼 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤★) ] + 𝔼 [ 𝐹 (𝑤★) − 𝐹 (𝑤𝑆) ] The population error term on the RHS is 𝑂 (1/ √ 𝑛) by the classical analysis of SGD. The second term is bounded by Lemma 1; 𝔼 [ 𝐹 (𝑤★) − 𝐹 (𝑤𝑆) ] ≤ 𝔼 [ 𝐹 (𝑤★) − 𝐹 (𝑤★𝑆) ] ≤ 4𝐺𝐷/ √ 𝑛, and the result follows. □ In the subsections that follow we continue to study the generalization gap in the context of common variants to the basic SCO setup. 
3.1 SCO with non-convex components When we relax the convexity assumption and only require the losses to be convex in expectation, we can construct a learning problem where SGD exhibits a case of benign overfitting. In contrast to Theorem 1, here we actually drive the SGD iterates towards an ERM solution, thus achieving a low optimization error and an empirical risk that under-estimates the population risk. Theorem 2. Let 𝑛 ∈ ℕ, 𝑛 ≥ 4, 𝑑 ≥ 24𝑛 log 𝑛, 𝑊 = B2𝑑0 (1), and η ≤ 1/ √ 𝑛. Then there exists a distribution Z over 𝑍 and a 4-Lipschitz loss 𝑓 : 𝑊 × 𝑍 → ℝ where 𝔼𝑧∼Z 𝑓 (𝑤; 𝑧) is convex in 𝑤, such that for any suffix average 𝑤 of SGD initialized at 𝑤1 = 0, with step size η; 𝔼 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤𝑆) ] = Ω(η2𝑛). The construction and proof of Theorem 2 given in the supplementary follow a methodology similar to that of Theorem 1. Here however, we exploit non convex losses to form an empirical loss landscape where the ERM solution significantly outperforms the population minimizer 𝑤★ (notably, a feat not possible when losses are individually convex, by Corollary 1). Our loss function is defined by 𝑓 (𝑤; 𝑧) B ∑𝑑 𝑖=1 𝑧(𝑖)𝑤(𝑖)2 + φ(𝑤; 𝑧), with each component playing a similar role as before. We work with the distribution 𝑧 ∼ {0, 1}𝑑 where 𝑧(𝑖) = 1 w.p. δ, 𝑧(𝑖) = −1 w.p. δ, and 𝑧(𝑖) = 0 w.p. 1 − 2δ. The intuition is that coordinates accumulating many −1’s offer regions in the 𝐿2 ball where the empirical risk is “too good” compared to the population risk. We tailor the extra dimensions and φ in coordination with the −1 values so that the sub-gradients guide the SGD iterates towards these regions, in exactly the same manner the construction of Theorem 1 drives the iterates to high loss regions. We note that while the statement of Theorem 2 is specialized to step size smaller than 1/ √ 𝑛, it may be extended to any step size using arguments similar to those given in the proof of Theorem 1. 3.2 SCO with strongly convex components Our basic construction extends to the strongly convex case by making only technical modification to Theorem 1. The theorem below concerns the standard step size choice for strongly convex objectives. We provide its proof in the supplementary. Theorem 3. Let 𝑛 ∈ ℕ, 𝑛 ≥ 10, 𝑑 ≥ 24𝑛 log 𝑛, 𝑊 = B2𝑑0 (1), and λ ≥ 1/ √ 𝑛. Then there exists a distribution over instance set 𝑍 and a 4-Lipschitz, λ-strongly convex loss function 𝑓 : 𝑊 × 𝑍 → ℝ (i) the optimization error is large; 𝔼𝑆∼Z𝑛 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤★𝑆) ] = Ω ( 1 λ √ 𝑛 ) , (ii) the generalization gap is large; 𝔼𝑆∼Z𝑛 [ 𝐹 (𝑤𝑆) − 𝐹 (𝑤𝑆) ] = Ω ( 1 λ √ 𝑛 ) , where 𝑤𝑆 is any suffix average of SGD initialized at 𝑤1 = 0, with step size schedule η𝑡 = 1/λ𝑡. Furthermore, the problem instance where this occurs is precisely the λ regularized version of the example featured in Theorem 1. We note that an immediate implication of the above theorem is that if we seek a generalization gap upper bound for a weakly convex problem by means of regularization (meaning, by running SGD on a regularized problem), we would have to take λ ≥ 1 to guarantee a gap of 𝑂 (1/ √ 𝑛). To see this, note that the generalization gap (of any hypothesis) of the regularized problem is the same as that of the original. On the other hand, taking λ ≥ 1 will of course be detrimental to the population error guarantee. Hence, one cannot circumvent the generalization gap lower bound by regularization without compromising the population error. We conclude this section with a note regarding stability rates of SGD in non-smooth SCO. 
Implicit in Theorem 1, is that average stability of SGD coincides with the tight uniform stability rate of Θ(η √ 𝑛) established by [7]. This is because Theorem 1 provides the Ω(η √ 𝑛) lower bound on the most general stability notion, which is precisely the generalization gap [32]. We refer the reader to the supplementary for a more elaborate discussion. 4 SGD with vs without replacement In this section, we consider a different algorithm in the context of the basic SCO setup; SGD over examples drawn with-replacement from the training set. This is not to be confused with one-pass SGD discussed in Section 3, which corresponds to without-replacement SGD on the training set, or alternatively with-replacement SGD over the population distribution. Given a training set 𝑆 = {𝑧1, . . . , 𝑧𝑛} ∼ Z𝑛, we define with-replacement projected SGD initialized at 𝑤1 ∈ 𝑊 by 𝑤𝑡+1 ← Π𝑊 (𝑤𝑡 − η̂𝑡 ) , where ̂𝑡 ∈ 𝜕 𝑓 (𝑤𝑡 ; ̂𝑡 ) and ̂𝑡 ∼ Unif (𝑆). Perhaps surprisingly, this version of SGD does not overfit the training data; our theorem below establishes that with proper iterate averaging, the population risk converges at the optimal rate. Theorem 4. Let 𝑊 ⊂ ℝ𝑑 with diameter 𝐷, Z be any distribution over 𝑍 , and 𝑓 : 𝑊 × 𝑍 → ℝ be convex and 𝐺-Lipschitz in the first argument. Let 𝑆 ∼ Z𝑛 be a training set of 𝑛 ∈ ℕ datapoints drawn i.i.d. from Z, and consider running SGD over training examples sampled with-replacement, uniformly and independently from 𝑆. Then, for step size η = 𝐷 𝐺 √ 𝑛 and 𝑤 B 2 𝑛+1 ∑𝑛 𝑡=1 𝑛−𝑡+1 𝑛 𝑤𝑡 , the following upper bound holds; 𝔼 [ 𝐹 (𝑤) − 𝐹 (𝑤★) ] ≤ 10𝐺𝐷√ 𝑛 . Proof. Fix a time-step 𝑡 ∈ [𝑛], and observe that if we don’t condition on 𝑆, we may view the random datapoint ̂𝑡 as a mixture between a fresh i.i.d. sample from the population and a uniformly distributed sample from the previously processed datapoints 𝑆𝑡−1 B {̂1, . . . , ̂𝑡−1}; ̂𝑡 | 𝑆𝑡−1 = { 𝑧 ∼ Z w.p. 1 − 𝑡−1 𝑛 , 𝑧 ∼ Unif (𝑆𝑡−1) w.p. 𝑡−1𝑛 . With this in mind, denote ̂𝑡 (𝑤) B 𝑓 (𝑤; ̂𝑡 ), fix 𝑆𝑡−1 and observe: 𝔼̂𝑡 [ ̂𝑡 (𝑤𝑡 ) − ̂𝑡 (𝑤★) | 𝑆𝑡−1 ] = ( 1 − 𝑡 − 1 𝑛 ) 𝔼𝑧∼Z [ 𝑓 (𝑤𝑡 ; 𝑧) − 𝑓 (𝑤★; 𝑧) ] + 𝑡 − 1 𝑛 1 𝑡 − 1 𝑡−1∑︁ 𝑖=1 ̂𝑖 (𝑤𝑡 ) − ̂𝑖 (𝑤★). Rearranging and taking expectation with respect to 𝑆𝑡−1 we obtain( 1 − 𝑡 − 1 𝑛 ) 𝔼 [ 𝑓 (𝑤𝑡 ; 𝑧) − 𝑓 (𝑤★; 𝑧) ] = 𝔼 [ ̂𝑡 (𝑤𝑡 ) − ̂𝑡 (𝑤★) ] + 𝔼 [ 1 𝑛 𝑡−1∑︁ 𝑖=1 ̂𝑖 (𝑤★) − ̂𝑖 (𝑤𝑡 ) ] ≤ 𝔼 [ ̂𝑡 (𝑤𝑡 ) − ̂𝑡 (𝑤★) ] + 4𝐺𝐷 √ 𝑡 𝑛 , (2) where the inequality follows from Lemma 1. Now, by a direct computation we have ∑𝑛 𝑡=1 ( 1 − 𝑡−1 𝑛 ) = 𝑛+1 2 , which motivates setting 𝑤 B 2 𝑛+1 ∑𝑛 𝑡=1 𝑛−𝑡+1 𝑛 𝑤𝑡 . By convexity of 𝐹, Eq. (2), and the standard regret analysis of gradient descent [e.g., 15] we now have 𝔼 [ 𝐹 (𝑤) − 𝐹 (𝑤★) ] ≤ 2 𝑛 + 1 𝑛∑︁ 𝑡=1 ( 1 − 𝑡 − 1 𝑛 ) 𝔼 [ 𝐹 (𝑤𝑡 ) − 𝐹 (𝑤★) ] ≤ 2 𝑛 + 1 𝑛∑︁ 𝑡=1 𝔼 [ ̂𝑡 (𝑤𝑡 ) − ̂𝑡 (𝑤★) ] + 2 𝑛 + 1 𝑛∑︁ 𝑡=1 4𝐺𝐷 √ 𝑡 𝑛 ≤ 2 𝑛 𝔼 [ 𝑛∑︁ 𝑡=1 ̂𝑡 (𝑤𝑡 ) − ̂𝑡 (𝑤★) ] + 8𝐺𝐷√ 𝑛 ≤ 2 𝑛 ( 𝐷2 2η + η𝐺 2 2 ) + 8𝐺𝐷√ 𝑛 = 10𝐺𝐷 √ 𝑛 , where the last inequality follows by our choice of η = 𝐷 𝐺 √ 𝑛 . □ Evidently, the averaging scheme dictated by Theorem 4 does little to hurt the empirical risk convergence guarantee, which follows from the standard analysis with little modifications (for completeness we provide a formal statement and proof in the supplementary). Combined with Lemma 1, this immediately implies a generalization gap upper bound for with-replacement SGD. Notably, this shows with-replacement SGD provides for an example of a (natural) algorithm in the SCO learning setup that is not even stable on-average, but nonetheless has a well bounded generalization gap. 
We refer the reader to the discussion in the supplementary for more details. Corollary 2. For any distribution Z and loss function 𝑓 : 𝑊 × 𝑍 → ℝ convex and Lipschitz in the first argument, running SGD with step size and averaging as specified in Theorem 4 ensures 𝔼[𝐹 (𝑤) − 𝐹 (𝑤)] ≤ 𝑂 (1/√𝑛). Proof. We have; 𝔼[𝐹 (𝑤) − 𝐹 (𝑤)] ≤ 𝔼 [𝐹 (𝑤) − 𝐹 (𝑤★)] + 𝔼[𝐹 (𝑤★) − 𝐹 (𝑤★𝑆)] + 𝔼[𝐹 (𝑤★𝑆) − 𝐹 (𝑤)] . The first term is upper bounded by convergence of the population risk provided by Theorem 4, the second by Lemma 1, and the third by the standard analysis of SGD (see the supplementary). □ 5 Multi-epoch SGD for empirical risk minimization In this section, we forgo the existence of a population distribution and discuss convergence properties of without-replacement SGD (wor-SGD) for finite sum optimization problems. A relatively long line of work discussed in the introduction studies this problem in the smooth case. The work of [20] noted smoothness is a necessary assumption to obtain rates that are strictly better than the 𝑂 (1/ √ 𝑛𝐾) guaranteed by with-replacement SGD for 𝑛 losses and 𝐾 epochs, due to a lower bound that follows from the deterministic case (e.g., [10]). Here we establish that smoothness is in fact necessary to obtain rates that are not strictly worse than with-replacement SGD. We consider running multiple passes of wor-SGD to solve the finite sum optimization problem given by the objective 𝐹 (𝑤) B 1 𝑛 𝑛∑︁ 𝑡=1 𝑓 (𝑤; 𝑡) (3) where { 𝑓 (𝑤; 𝑡)}𝑛𝑡=1 is a set of 𝑛 convex, 𝐺-Lipschitz losses defined over a convex and compact domain 𝑊 ⊆ ℝ𝑑 . Throughout this section we let 𝑤★ B min𝑤∈𝑊 𝐹 (𝑤) denote the minimizer of the objective Eq. (3). In every epoch 𝑘 ∈ [𝐾] we process the losses in the order specified by a permutation π𝑘 : [𝑛] ↔ [𝑛] sampled uniformly at random, either once in the beginning of the algorithm (single-shuffle), or at the onset of every epoch (multi-shuffle). Multi-epoch wor-SGD initialized at 𝑤11 ∈ 𝑊 is specified by the following equations; 𝑤𝑘𝑡+1 ← Π𝑊 (𝑤 𝑘 𝑡 − η𝑔𝑘𝑡 ), where 𝑔𝑘𝑡 ∈ 𝜕 𝑓 𝑘𝑡 (𝑤𝑘𝑡 ) 𝑤𝑘+11 B 𝑤 𝑘 𝑛+1, where we denote 𝑓 𝑘𝑡 (𝑤) B 𝑓 (𝑤;π𝑘 (𝑡)). A near-immediate implication of Theorem 1 is that there exists a set of convex losses on which a single epoch of wor-SGD cannot converge at a rate faster than 1/𝑛1/4. Theorem 5 presented below extends our basic construction from Theorem 1 to accommodate multiple epochs. The main challenge here is in devising a mechanism that will allow fresh bad gradient steps to take place on every new epoch. Theorem 5. Let 𝑛, 𝐾 ∈ ℕ, 𝐾 ≥ 4, 𝑛 ≥ 4, 𝑐 B 4/(21/𝐾 − 1), 𝑑 ≥ 26𝑛 log(𝑐𝑛𝐾) , and 𝑊 = B𝑑′0 (1) where 𝑑 ′ = (𝑛𝐾 + 1)𝑑. Then there exists a set of 𝑛 convex, 4-Lipschitz losses such that after 𝐾 epochs of either multi-shuffle or single-shuffle SGD initialized at 𝑤11 = 0 with step size η ≤ 1/ √ 2𝑛𝐾 , it holds that 𝔼 [𝐹 (𝑤) − 𝐹 (𝑤∗)] = Ω ( min { 1, η √︂ 𝑛 𝐽 + 1 η𝑛𝐾 + η }) , where 𝑤 is any suffix average of the last 𝐽 epochs. In particular, we obtain a bound of Ω ( 𝑛−1/4𝐾−3/4 ) for any suffix average and any choice of η. The proof of Theorem 5 is provided in the supplementary. The construction in the proof takes the idea that the training set can be encoded in the SGD iterate to the extreme. The loss function and gradient oracle are designed in such a way so as to record the training examples in their full form and order into the iterate. We then exploit this encoded information with an “adversarial” gradient oracle that returns the bad sub-gradients on each gradient step in every new epoch. 
Next, we complement Theorem 5 with an upper bound that builds on stability arguments similar to those of the smooth case [20]. Importantly though, the lack of smoothness means worse stability rates and necessitates extra care in the technical arguments. Below, we prove the multi-shuffle case, and defer the full details of the single-shuffle case to the supplementary.

Theorem 6. Let S = {f(w; t)}_{t=1}^n be a set of n convex, G-Lipschitz losses over a convex and compact domain W ⊆ ℝ^d of diameter D, and consider running K ≥ 1 epochs of wor-SGD over S. Then, we have the following guarantees:

(i) For multi-shuffle, with step size η = D/(G n^{3/4} K^{1/2}), we have 𝔼[F̂(w̄) − F̂(w★)] ≤ 3GD/(n^{1/4} K^{1/2}).

(ii) For single-shuffle, with step size η = D/(2G n^{3/4} K^{3/4}) and assuming K ≥ n, we have 𝔼[F̂(w̄) − F̂(w★)] ≤ 10GD/(n^{1/4} K^{1/4}).

In both of the above bounds, w̄ = (1/(nK)) ∑_{k∈[K], t∈[n]} w^k_t, and the expectation is over the random permutations of the losses.

Proof (multi-shuffle case). Observe:

F̂(w̄) − F̂(w★) ≤ (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − F̂(w★))
 = (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − f^k_t(w★))
 = (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − f^k_t(w^k_t)) + (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (f^k_t(w^k_t) − f^k_t(w★))
 ≤ (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − f^k_t(w^k_t)) + D²/(2ηnK) + ηG²/2,

with the last inequality following from the standard nK-round regret bound for gradient descent [see e.g., 15]. To bound the other term, using Lemma 10, we relate the difference between the without-replacement loss distribution and the full batch objective to the uniform stability rate of SGD, which may then be bounded by applying Lemma 11:

𝔼[F̂(w^k_t) − f^k_t(w^k_t)] = 𝔼_{π_1,…,π_{k−1}} 𝔼_{π_k}[F̂(w^k_t) − f^k_t(w^k_t) | w^k_1] ≤ 𝔼_{π_1,…,π_{k−1}}[G ε^{SGD}_{stab}(t − 1)] = G ε^{SGD}_{stab}(t − 1) ≤ 2ηG²√t.

Concluding, we have that

𝔼[F̂(w̄) − F̂(w★)] ≤ (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n 𝔼[F̂(w^k_t) − f^k_t(w^k_t)] + D²/(2ηnK) + ηG²/2
 ≤ (2/(nK)) ∑_{k=1}^K ∑_{t=1}^n ηG²√t + D²/(2ηnK) + ηG²/2
 ≤ 2ηG²√n + D²/(2ηnK) + ηG²/2 ≤ 3GD/(n^{1/4} K^{1/2}),

where the last inequality follows from our choice of η = D/(G n^{3/4} K^{1/2}). □

Acknowledgements and funding disclosure

This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 882396), by the Israel Science Foundation (grant numbers 993/17, 2549/19, 2188/20), by the Len Blavatnik and the Blavatnik Family foundation, by the Yandex Initiative in Machine Learning at Tel Aviv University, by a grant from the Tel Aviv University Center for AI and Data Science (TAD), and by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What is the focus of the paper regarding Stochastic Gradient Descent (SGD)?
2. What are the strengths of the paper's contributions to SGD research?
3. What are the weaknesses of the paper, particularly regarding its lack of discussion and conclusions?
4. Are there any concerns or limitations in the paper's approach or findings that could have negative societal implications?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
The paper shows, by considering the stochastic convex optimization framework, that there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of Ω(1) (in the without-replacement case). SGD is therefore not algorithmically stable. This phenomenon does not occur for SGD with replacement.

Strengths And Weaknesses
Strengths:
• the phenomenon reported in the paper is novel, intriguing, and potentially impactful in the context of SGD generalization bounds
• the analytical evidence is solid
• the multi-epoch regime is addressed
Weaknesses:
• neither discussions nor conclusions are present

Questions
Discussions and conclusions are not present.
Minors: l338: forgo

Limitations
Limitations and the potential negative societal impact of this work could be discussed more.
NIPS
Title Benign Underfitting of Stochastic Gradient Descent

Abstract
We study to what extent stochastic gradient descent (SGD) may be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to training data. We consider the fundamental stochastic convex optimization framework, where (one pass, without-replacement) SGD is classically known to minimize the population risk at rate O(1/√n), and prove that, surprisingly, there exist problem instances where the SGD solution exhibits both empirical risk and generalization gap of Ω(1). Consequently, it turns out that SGD is not algorithmically stable in any sense, and its generalization ability cannot be explained by uniform convergence or any other currently known generalization bound technique for that matter (other than that of its classical analysis). We then continue to analyze the closely related with-replacement SGD, for which we show that an analogous phenomenon does not occur and prove that its population risk does in fact converge at the optimal rate. Finally, we interpret our main results in the context of without-replacement SGD for finite-sum convex optimization problems, and derive upper and lower bounds for the multi-epoch regime that significantly improve upon previously known results.

1 Introduction

Conventional wisdom in statistical learning revolves around what is traditionally known as the bias-variance dilemma; the classical theory stipulates the quality of fit to the training data be in a trade-off with model complexity, aiming for a sweet spot where training error is small but yet representative of performance on independent test data. This perspective is reflected in the vast majority of generalization bound techniques offered by contemporary learning theory. Uniform convergence approaches [36, 4] seek capacity control over the model function class, and employ uniform laws of large numbers to argue convergence of sample averages to their respective expectations. Algorithmic stability [9, 32], on the other hand, builds on controlling sensitivity of the learning algorithm to small changes in its input, and provides algorithm dependent bounds. Nevertheless, despite the conceptual and technical differences between these two methods, both ultimately produce risk bounds by controlling the training error, and the generalization gap. The same is true for many other techniques, including sample compression [17, 2], PAC-Bayes [18, 12], and information theoretic generalization bounds [29, 37, 24], to name a few. In recent years it has become clear there are other, substantially different, ways to manage the fit vs.
complexity trade-off, that are in a sense incompatible with traditional generalization bound techniques. Evidently, heavily over-parameterized deep neural networks may be trained to perfectly fit training data and generalize well nonetheless [38, 25, 26], thus seemingly disobeying conventional statistical wisdom. This phenomenon has garnered significant attention, with a flurry of research works dedicated to developing new techniques that would be able to explain the strong generalization performance of algorithms in this so-called interpolation regime (see [6, 8] and references therein). Notably, while these algorithms do not strike a balance between model complexity and fit to the data in the traditional sense, fundamentally, they still minimize the empirical risk as a proxy to test performance. To summarize, in the classical and modern regimes alike, learning methods are thought of as minimizing some combination of the training error and generalization gap, with reasoning that relies in one way or another on the following trivial, yet arguably most profound, bound:

test-error ≤ train-error + |generalization gap|.   (1)

In this work, we focus on stochastic gradient descent (SGD), the canonical algorithm for training machine learning models nowadays, and ask whether its generalization performance can be understood through a similar lens. We consider the fundamental stochastic convex optimization (SCO) framework, in which it is well known that SGD minimizes the population risk at a rate of O(1/√n) [23]. Remarkably, the classical analysis targets the population risk directly, and in contrast with other generalization arguments, at least seemingly does not rely on the above bound. This highlights an intriguing question: are these quantities, so fundamental to learning theory, relevant to the way that SGD "works"? Put differently, is it possible to provide a more "conventional" analysis of SGD that conforms with (1)? Our main result shows that, perhaps surprisingly, there exist convex learning problems where the above bound becomes vacuous for SGD: namely, SGD minimizes the population risk, but at the same time, it does not minimize the empirical risk and thus exhibits a constant generalization gap. This accords neither with the traditional viewpoint nor with that of interpolation, as both recognize the empirical risk as the principal minimization objective. We refer to this phenomenon as benign underfitting: evidently, SGD underfits the training data, but its classical analysis affirms this underfitting to be benign, in the sense that test performance is never compromised as a result. Our construction presents a learning problem where the output of SGD with step size η over n i.i.d. training examples is Ω(η√n) sub-optimal w.r.t. the best fit possible, and consequently has a generalization gap of the same order. Notably, with the standard step size choice of 1/√n necessary to ensure the population risk converges at the optimal rate, this lower bound amounts to a constant. Many previously plausible explanations for generalization properties of this algorithm are thereby rendered inadequate, at least in the elementary convex setup we consider here. First, it is clear that SGD cannot be framed as any reasonable regularized empirical risk minimization procedure for the simple reason that it does not minimize the empirical risk, which challenges the implicit regularization viewpoint on the generalization of SGD.
Second, any attempt to explain the generalization of SGD by uniform convergence over any (possibly data-dependent) hypothesis set cannot hold, simply because the sample average associated with the very same training set SGD was trained on is not necessarily close to its respective expectation. Finally, as it turns out, SGD provides a strikingly natural example of an algorithm that generalizes well but is not stable in any sense, as the most general notion of algorithmic stability is entirely equivalent to the generalization gap [32]. We then move on to study the generalization gap and empirical risk guarantees of SGD in a broader context. We study the case of non-convex and strongly convex component functions, and present natural extensions of our basic result. In addition, we analyse the variant of SGD where datapoints are sampled with-replacement from the training set, in which case the train error is of course low but, perhaps surprisingly, the population risk is well behaved. Finally, we make the natural connection to the study of without-replacement SGD for empirical risk minimization, and derive upper and lower bounds for the multi-epoch regime. These last two points are discussed in further detail in the following.

With vs without-replacement SGD. We may view one-pass SGD as processing the data via without-replacement sampling from the training set, as randomly reshuffling the examples does not change their unconditional distribution. Thus, it is interesting to consider the generalization gap of the closely related algorithm given by running SGD over examples sampled with-replacement from the training set. Considering the instability of SGD for non-smooth losses (see the supplementary for a detailed discussion) and the fact that this variant targets the empirical objective, a priori it would seem this algorithm would overfit the training set and not provide strong population risk guarantees.
Our main lower bound draws inspiration from constructions presented in the works of [7] and [1], both of which rely on instability, the latter also exploiting failure of uniform convergence. However, neither of these contains the main ideas necessary to provoke the optimization dynamics required in our example. A crucial ingredient in our construction consists of encoding into the SGD iterate information about previous training examples. This, combined with careful design of the loss function, gradient oracle and population distribution, allows correlating sub-gradients of independent training examples, and in turn guiding the SGD iterates to ascend the empirical risk. 1.1 Summary of main contributions To summarize, the main contributions of the paper are as follows: • One-pass SGD in SCO. In Section 3, we study the basic SCO setup where the component losses are assumed to be individually convex, and present a construction where the expected empirical risk and therefore the generalization gap are both Ω(η √ 𝑛). We also provide extensions of our main construction demonstrating; – SCO with non-convex component functions may exhibit cases of benign overfitting, where 𝔼 [ 𝐹 (𝑤) − 𝐹 (𝑤) ] = Ω(η2𝑛). – In SCO with λ-strongly convex losses the worst case generalization gap is Ω(1/λ √ 𝑛) for the standard step size choice. • With vs without replacement SGD in SCO. In Section 4, we prove the variant of SGD where the training examples are processed via sampling with-replacement from the training set minimizes the population risk at the optimal rate, and thus enjoys a generalization gap upper bound bound of 𝑂 (1/ √ 𝑛). • Multi-epoch without-replacement SGD. In Section 5, we study convergence rates of withoutreplacement SGD for finite sum convex optimization problems. We prove a lower bound of Ω(𝑛−1/4𝐾−3/4) on the optimization error after 𝐾 epochs over 𝑛 convex losses, and complement with upper bounds of 𝑂 (𝑛−1/4𝐾−1/2) and 𝑂 (𝑛−1/4𝐾−1/4) for respectively the multi-shuffle and single-shuffle SGD variants. 1.2 Additional related work Gradient descent, algorithmic stability and generalization. Closely related to our work is the study of stability properties of SGD. For smooth losses, [14] provide upper bounds on the generalization gap by appealing to uniform stability, yielding an 𝑂 (1/ √ 𝑛) rate for a single epoch of 𝑛 convex losses and the standard step size choice. In a later work, [7] prove tight rates for uniform stability of SGD in the setting of non-smooth losses, establishing these scale substantially worse; Θ(η √ 𝑛) for step size η and 𝑛 training examples. Our work shows that in fact the worst case rate of the generalization gap completely coincides with the uniform stability rate of SGD. A number of works prior to ours studied the extent to which SGD can be explained by implicit regularization in SCO. [16] study the setup where losses are smooth but only required to be convex in expectation, and show SGD may successfully learn when regularized ERM does not. Prior to their work, [11] also rule out a wide range of implicit regularization based explanations of SGD in the basic SCO setup with convex losses. On a more general level, our work is related to the study of stability and generalization in modern learning theory, pioneered by [9, 32]. In particular, the failure of (dimension independent) uniform convergence in SCO was established in [32]. The work of [13] improves the dimension dependence in the construction of [32] from exponential to linear in the number of training examples. 
Notably, the construction featured in our main result requires the dimension to be exponential in the sample size; however, the techniques of [13] do not readily extend to our setting. Thus, the optimal dimension dependence for a generalization gap lower bound is left for future work.

Without-replacement SGD for empirical risk minimization. A relatively long line of work studies convergence properties of without-replacement SGD from a pure optimization perspective (e.g., [28, 20, 30, 27, 19, 31]). Nearly all the papers in this line of work adopt the smoothness assumption, with near optimal bounds established by [20]. An exception is the paper of [33] where an O(1/√(nK)) upper bound is obtained for n datapoints and K epochs, albeit only for generalized linear models over a bounded domain, notably a setting where uniform convergence holds. Prior to this thread of research, [22] prove a convergence rate of O(n/√K) for non-smooth loss functions that applies for any ordering of the losses. To the best of our knowledge, this is also the state-of-the-art result for without-replacement SGD in the non-smooth setting without further assumptions on the loss functions.

Benign overfitting vs. benign underfitting. While both benign underfitting and benign overfitting challenge traditional generalization techniques, which postulate the training error to represent the test error, as we discuss above these two phenomena point to very different regimes of learning. In particular, [34] shows that benign overfitting requires distributional assumptions for the interpolating algorithm to succeed. In contrast, we show that benign underfitting happens for SGD in a setting where it provably learns (namely, SCO), without any distributional assumptions. We also point out that Corollary 1 shows benign overfitting cannot happen in the setup we consider, hence the two phenomena seem to arise in different setups.

Explaining generalization of interpolators. As already discussed, there is a large recent body of work dedicated to understanding why over-parameterized models trained by SGD to zero training error generalize well [6, 8, and references therein]. In particular, the work of [5] aims at explaining the phenomenon for high dimensional linear models. Some recent papers investigate limitations of certain techniques in explaining generalization of interpolating algorithms: [21] show uniform convergence fails to explain generalization of SGD in a setup where the generalization gap is in fact well bounded, thus in sharp contrast to our work; [3] rule out the possibility of a large class of excess risk bounds to explain generalization of minimum norm interpolants. Unlike our work, they study properties of possible risk bounds when benign overfitting occurs, and thus do not pertain to SGD, which never benignly overfits in SCO.

2 Preliminaries

We consider stochastic convex optimization (SCO) specified by a population distribution Z over a datapoint set Z, and a loss function f : W × Z → ℝ where W ⊂ ℝ^d is convex and compact. We denote

F(w) := 𝔼_{z∼Z} f(w; z) (population loss),   F̂(w) := (1/n) ∑_{i=1}^n f(w; z_i) (empirical loss),

where {z_1, …, z_n} ⊆ Z stands for the training set, which we regularly denote by S. We let w★ := arg min_{w∈W} F(w) denote the population minimizer, and w★_S := arg min_{w∈W} F̂(w) denote the empirical risk minimizer (ERM). The diameter of W is defined by max_{x,y∈W} {∥x − y∥} where ∥·∥ denotes the Euclidean norm, and B^d_0(1) := {x ∈ ℝ^d | ∥x∥ ≤ 1} denotes the L2 unit ball in ℝ^d. Given a training set S = {z_1, …
, z_n} ∼ Z^n and a learning algorithm that outputs a hypothesis w_S, we define the generalization gap to be the absolute value of the expected difference between test and train losses:

|𝔼_{S∼Z^n}[F(w_S) − F̂(w_S)]|. (generalization gap)

Throughout most of the paper, we consider one-pass projected SGD over S: initialize at w_1 ∈ W; for t = 1, 2, …, n:

w_{t+1} ← Π_W(w_t − η g_t), with g_t ∈ ∂f(w_t; z_t),

where ∂f(w; z) denotes the set of sub-gradients of f(·; z) at the point w ∈ W, and Π_W : ℝ^d → W the projection operation onto W.

3 A generalization gap lower bound for SGD

In this section, we establish our main result: that there exist convex learning problems where SGD incurs a large optimization error and therefore also a large generalization gap. When losses are convex these two quantities are closely related, since in expectation the empirical risk minimizer cannot significantly outperform the population minimizer (a claim that will be made rigorous shortly after our main theorem). Our construction builds on losses that are highly non-smooth, leading to SGD taking gradient steps that actually ascend the empirical objective.

Theorem 1. Let n ∈ ℕ, n ≥ 4, d ≥ 2^{4n} log n, and W = B^{2d}_0(1). Then there exists a distribution over an instance set Z and a 4-Lipschitz convex loss function f : W × Z → ℝ such that running SGD initialized at w_1 = 0, with step size η > 0 over S ∼ Z^n yields:
(i) a large optimization error: 𝔼[F̂(w̄_S) − F̂(w★_S)] = Ω(min{η√n, 1/(η√n)});
(ii) a large generalization gap: 𝔼[F̂(w̄_S) − F(w̄_S)] = Ω(min{η√n, 1/(η√n)});
where w̄_S is any suffix average of the iterates. In particular, for η = Θ(1/√n), the population risk is 𝔼[F(w̄_S) − F(w★)] = O(1/√n), while the generalization gap and training error are both Ω(1).

A detailed proof of Theorem 1 is deferred to the supplementary; in the following we provide an informal overview containing its principal ingredients.

Proof sketch. Let Z := {0, 1}^d, and consider a population distribution Z such that z(i) = 1 with probability δ. We will use a loss function of the form f(w; z) := ∥z ⊙ w∥ + φ(w; z), where ⊙ denotes the element-wise product. The high level idea is that the norm component penalizes w's that correlate with the given sample point z, and the φ function (the details of which are left for the supplementary) is tailored so that it drives the SGD iterates precisely to those areas in the L2 ball where the iterate correlates with the training set {z_1, …, z_n}. In addition, the choice of parameters is such that the population loss is approximately zero over the entire domain. Taking d sufficiently large compared to δ^{−1}, we ensure that w.h.p., for every round t ∈ [n] there exist many coordinates i ∈ [d] with a prefix of ones: z_1(i) = ⋯ = z_{t−1}(i) = 1. With δ chosen sufficiently small compared to n, we ensure that as long as i ∈ [d] is any coordinate chosen independently of {z_{t+1}, …, z_n}, w.h.p. this coordinate will have a suffix of zeros: z_{t+1}(i) = ⋯ = z_n(i) = 0. Our goal is to make SGD take steps w_{t+1} ≈ w_t − η e_{i_t} (where e_i denotes the i'th standard basis vector) where i_t ∈ [d] is a coordinate with the aforementioned property of having a prefix of ones followed by a suffix of zeros. Note that since these steps are taken after the prefix of ones has ended, they will inflict a large empirical loss from the norm component, but will not be "corrected" by future steps owed to the suffix of zeros. To achieve this, we design φ so that it encodes the relevant information into the SGD iterates.
Specifically, φ "flags" (using some extra dimensions) all coordinates i ∈ [d] where a prefix of ones has been encountered. In addition, using another max component in φ we have that for all such coordinates i, e_i ∈ ∂f(w_t; z) for any example z (as this component in the loss depends only on the iterate w_t). In particular, we get that e_i ∈ ∂f(w_t; z_t). Then, our gradient oracle just returns a subgradient pointing towards one of these coordinates (for convenience, we use the minimal one), which we denote by i_t, and SGD makes the desired step. Notably, the coordinate i_t chosen by the subgradient oracle is independent of future examples, and therefore will have a suffix of zeros w.h.p. Hence, as mentioned, this ensures no gradient signal after round t will be able to correct the empirical risk ascent on i_t. Concluding, we have that for the final iterate w̄ := w_{n+1}, we get w̄(i_t) = −η for all t ∈ [n], therefore

F̂(w̄) = (1/n) ∑_{i=1}^n f(w̄; z_i) ≈ (1/n) ∑_{i=1}^n ∥z_i ⊙ w̄∥ ≈ ∥w̄∥ ≈ √(η²n) = η√n.

A similar argument requiring a few more technical steps shows the same is true for any suffix average w̄. Noting that F̂(0) = 0, we get that the optimization error is Ω(η√n). The implication for the generalization gap follows immediately with the standard step size choice of η = 1/√n, owed to SGD's population risk convergence guarantee. For an arbitrary step size, the result follows from a simple computation, and the proof is concluded. □

The magnitude of the generalization gap featured in Theorem 1 stems from the large optimization error, which results in the empirical risk over-estimating the population risk by a large margin. Evidently, for convex losses the converse is always false: the empirical risk will never significantly under-estimate the population risk (a fact that will turn out false when losses are only required to be convex in expectation; see Section 3.1). Indeed, stability of the regularized ERM solution implies the ERM does not perform significantly better on the training set compared to the population minimizer w★.

Lemma 1. Let W ⊂ ℝ^d with diameter D, Z any distribution over Z, and f : W × Z → ℝ convex and G-Lipschitz in the first argument. Then 𝔼[F(w★) − F̂(w★_S)] ≤ 4GD/√n.

Proof. Denote the regularized ERM by w^λ_S := arg min_{w∈W} {(1/n) ∑_{i=1}^n f(w; z_i) + (λ/2)∥w∥²}. Observe,

F(w★) ≤ 𝔼 F(w^λ_S) ≤ 𝔼 F̂(w^λ_S) + 4G²/(λn) ≤ 𝔼 F̂(w★_S) + (λ/2)D² + 4G²/(λn),

where the second inequality follows from stability of the regularized ERM (see Lemma 13). Choosing λ := 2G/(D√n), we get that 𝔼[F(w★) − F̂(w★_S)] = F(w★) − 𝔼 F̂(w★_S) ≤ 4GD/√n, as claimed. □

Since the optimization error is always positive, we see that the upper bound given by Lemma 1 implies an upper bound on the difference between the population and empirical risks.

Corollary 1. For any distribution Z over Z and Lipschitz loss function f : W × Z → ℝ convex in the first argument, running SGD with step size η := 1/√n guarantees 𝔼[F(w̄_S) − F̂(w̄_S)] ≤ O(1/√n).

Proof. We have

𝔼[F(w̄_S) − F̂(w̄_S)] = 𝔼[F(w̄_S) − F(w★)] + 𝔼[F(w★) − F̂(w̄_S)].

The population error term on the RHS is O(1/√n) by the classical analysis of SGD. The second term is bounded by Lemma 1: 𝔼[F(w★) − F̂(w̄_S)] ≤ 𝔼[F(w★) − F̂(w★_S)] ≤ 4GD/√n, and the result follows. □

In the subsections that follow we continue to study the generalization gap in the context of common variants of the basic SCO setup.
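Before moving to these variants, note that the one-pass procedure from Section 2 is straightforward to simulate. The sketch below runs one-pass projected SGD with suffix averaging and reports a plug-in estimate of the gap F(w̄) − F̂(w̄), using a held-out sample as a stand-in for the population risk; the oracles `subgrad`, `loss`, and `proj` are hypothetical, and the held-out estimate is only an empirical illustration, not part of the formal analysis.

```python
import numpy as np

def one_pass_sgd_gap(S, S_heldout, subgrad, loss, proj, w1, eta, suffix=0.5):
    """One-pass projected SGD over S with suffix averaging (sketch).

    Runs w_{t+1} = proj(w_t - eta * g_t) with g_t a subgradient of f(.; z_t)
    at w_t, then averages the last `suffix` fraction of the iterates.
    """
    w = np.asarray(w1, dtype=float).copy()
    iterates = []
    for z in S:                                # each example is touched once
        iterates.append(w.copy())
        w = proj(w - eta * subgrad(w, z))
    start = int((1.0 - suffix) * len(iterates))
    w_bar = np.mean(iterates[start:], axis=0)  # suffix average of the iterates
    train_risk = np.mean([loss(w_bar, z) for z in S])
    test_risk = np.mean([loss(w_bar, z) for z in S_heldout])
    return w_bar, test_risk - train_risk       # estimate of F(w_bar) - F_hat(w_bar)
```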
3.1 SCO with non-convex components

When we relax the convexity assumption and only require the losses to be convex in expectation, we can construct a learning problem where SGD exhibits a case of benign overfitting. In contrast to Theorem 1, here we actually drive the SGD iterates towards an ERM solution, thus achieving a low optimization error and an empirical risk that under-estimates the population risk.

Theorem 2. Let n ∈ ℕ, n ≥ 4, d ≥ 2^{4n} log n, W = B^{2d}_0(1), and η ≤ 1/√n. Then there exists a distribution Z over Z and a 4-Lipschitz loss f : W × Z → ℝ where 𝔼_{z∼Z} f(w; z) is convex in w, such that for any suffix average w̄_S of SGD initialized at w_1 = 0, with step size η:

𝔼[F(w̄_S) − F̂(w̄_S)] = Ω(η²n).

The construction and proof of Theorem 2, given in the supplementary, follow a methodology similar to that of Theorem 1. Here, however, we exploit non-convex losses to form an empirical loss landscape where the ERM solution significantly outperforms the population minimizer w★ (notably, a feat not possible when losses are individually convex, by Corollary 1). Our loss function is defined by f(w; z) := ∑_{i=1}^d z(i) w(i)² + φ(w; z), with each component playing a similar role as before. We work with the distribution z ∼ {−1, 0, 1}^d where z(i) = 1 w.p. δ, z(i) = −1 w.p. δ, and z(i) = 0 w.p. 1 − 2δ. The intuition is that coordinates accumulating many −1's offer regions in the L2 ball where the empirical risk is "too good" compared to the population risk. We tailor the extra dimensions and φ in coordination with the −1 values so that the sub-gradients guide the SGD iterates towards these regions, in exactly the same manner the construction of Theorem 1 drives the iterates to high loss regions. We note that while the statement of Theorem 2 is specialized to step sizes smaller than 1/√n, it may be extended to any step size using arguments similar to those given in the proof of Theorem 1.

3.2 SCO with strongly convex components

Our basic construction extends to the strongly convex case by making only technical modifications to Theorem 1. The theorem below concerns the standard step size choice for strongly convex objectives. We provide its proof in the supplementary.

Theorem 3. Let n ∈ ℕ, n ≥ 10, d ≥ 2^{4n} log n, W = B^{2d}_0(1), and λ ≥ 1/√n. Then there exists a distribution over an instance set Z and a 4-Lipschitz, λ-strongly convex loss function f : W × Z → ℝ such that:
(i) the optimization error is large: 𝔼_{S∼Z^n}[F̂(w̄_S) − F̂(w★_S)] = Ω(1/(λ√n));
(ii) the generalization gap is large: 𝔼_{S∼Z^n}[F̂(w̄_S) − F(w̄_S)] = Ω(1/(λ√n));
where w̄_S is any suffix average of SGD initialized at w_1 = 0, with step size schedule η_t = 1/(λt). Furthermore, the problem instance where this occurs is precisely the λ-regularized version of the example featured in Theorem 1.

We note that an immediate implication of the above theorem is that if we seek a generalization gap upper bound for a weakly convex problem by means of regularization (meaning, by running SGD on a regularized problem), we would have to take λ ≥ 1 to guarantee a gap of O(1/√n). To see this, note that the generalization gap (of any hypothesis) of the regularized problem is the same as that of the original. On the other hand, taking λ ≥ 1 will of course be detrimental to the population error guarantee. Hence, one cannot circumvent the generalization gap lower bound by regularization without compromising the population error. We conclude this section with a note regarding stability rates of SGD in non-smooth SCO.
Implicit in Theorem 1 is that the average stability of SGD coincides with the tight uniform stability rate of Θ(η√n) established by [7]. This is because Theorem 1 provides an Ω(η√n) lower bound on the most general stability notion, which is precisely the generalization gap [32]. We refer the reader to the supplementary for a more elaborate discussion.

4 SGD with vs without replacement

In this section, we consider a different algorithm in the context of the basic SCO setup: SGD over examples drawn with-replacement from the training set. This is not to be confused with one-pass SGD discussed in Section 3, which corresponds to without-replacement SGD on the training set, or alternatively with-replacement SGD over the population distribution. Given a training set S = {z_1, …, z_n} ∼ Z^n, we define with-replacement projected SGD initialized at w_1 ∈ W by

w_{t+1} ← Π_W(w_t − η ĝ_t), where ĝ_t ∈ ∂f(w_t; ẑ_t) and ẑ_t ∼ Unif(S).

Perhaps surprisingly, this version of SGD does not overfit the training data; our theorem below establishes that with proper iterate averaging, the population risk converges at the optimal rate.

Theorem 4. Let W ⊂ ℝ^d with diameter D, Z be any distribution over Z, and f : W × Z → ℝ be convex and G-Lipschitz in the first argument. Let S ∼ Z^n be a training set of n ∈ ℕ datapoints drawn i.i.d. from Z, and consider running SGD over training examples sampled with-replacement, uniformly and independently from S. Then, for step size η = D/(G√n) and w̄ := (2/(n+1)) ∑_{t=1}^n ((n−t+1)/n) w_t, the following upper bound holds:

𝔼[F(w̄) − F(w★)] ≤ 10GD/√n.

Proof. Fix a time-step t ∈ [n], and observe that if we don't condition on S, we may view the random datapoint ẑ_t as a mixture between a fresh i.i.d. sample from the population and a uniformly distributed sample from the previously processed datapoints S_{t−1} := {ẑ_1, …, ẑ_{t−1}}:

ẑ_t | S_{t−1} ∼ Z with probability 1 − (t−1)/n, and ẑ_t | S_{t−1} ∼ Unif(S_{t−1}) with probability (t−1)/n.

With this in mind, denote f̂_t(w) := f(w; ẑ_t), fix S_{t−1} and observe:

𝔼_{ẑ_t}[f̂_t(w_t) − f̂_t(w★) | S_{t−1}] = (1 − (t−1)/n) 𝔼_{z∼Z}[f(w_t; z) − f(w★; z)] + ((t−1)/n) · (1/(t−1)) ∑_{i=1}^{t−1} (f̂_i(w_t) − f̂_i(w★)).

Rearranging and taking expectation with respect to S_{t−1} we obtain

(1 − (t−1)/n) 𝔼[f(w_t; z) − f(w★; z)] = 𝔼[f̂_t(w_t) − f̂_t(w★)] + 𝔼[(1/n) ∑_{i=1}^{t−1} (f̂_i(w★) − f̂_i(w_t))] ≤ 𝔼[f̂_t(w_t) − f̂_t(w★)] + 4GD√t/n,   (2)

where the inequality follows from Lemma 1. Now, by a direct computation we have ∑_{t=1}^n (1 − (t−1)/n) = (n+1)/2, which motivates setting w̄ := (2/(n+1)) ∑_{t=1}^n ((n−t+1)/n) w_t. By convexity of F, Eq. (2), and the standard regret analysis of gradient descent [e.g., 15] we now have

𝔼[F(w̄) − F(w★)] ≤ (2/(n+1)) ∑_{t=1}^n (1 − (t−1)/n) 𝔼[F(w_t) − F(w★)]
 ≤ (2/(n+1)) ∑_{t=1}^n 𝔼[f̂_t(w_t) − f̂_t(w★)] + (2/(n+1)) ∑_{t=1}^n 4GD√t/n
 ≤ (2/n) 𝔼[∑_{t=1}^n (f̂_t(w_t) − f̂_t(w★))] + 8GD/√n
 ≤ (2/n)(D²/(2η) + ηG²n/2) + 8GD/√n = 10GD/√n,

where the last inequality follows by our choice of η = D/(G√n). □

Evidently, the averaging scheme dictated by Theorem 4 does little to hurt the empirical risk convergence guarantee, which follows from the standard analysis with minor modifications (for completeness we provide a formal statement and proof in the supplementary). Combined with Lemma 1, this immediately implies a generalization gap upper bound for with-replacement SGD. Notably, this shows that with-replacement SGD provides an example of a (natural) algorithm in the SCO learning setup that is not even stable on-average, but nonetheless has a well bounded generalization gap.
We refer the reader to the discussion in the supplementary for more details.

Corollary 2. For any distribution Z and loss function f : W × Z → ℝ convex and Lipschitz in the first argument, running SGD with step size and averaging as specified in Theorem 4 ensures 𝔼[F(w̄) − F̂(w̄)] ≤ O(1/√n).

Proof. We have

𝔼[F(w̄) − F̂(w̄)] ≤ 𝔼[F(w̄) − F(w★)] + 𝔼[F(w★) − F̂(w★_S)] + 𝔼[F̂(w★_S) − F̂(w̄)].

The first term is upper bounded by the convergence of the population risk provided by Theorem 4, the second by Lemma 1, and the third by the standard analysis of SGD (see the supplementary). □

5 Multi-epoch SGD for empirical risk minimization

In this section, we forgo the existence of a population distribution and discuss convergence properties of without-replacement SGD (wor-SGD) for finite sum optimization problems. A relatively long line of work discussed in the introduction studies this problem in the smooth case. The work of [20] noted smoothness is a necessary assumption to obtain rates that are strictly better than the O(1/√(nK)) guaranteed by with-replacement SGD for n losses and K epochs, due to a lower bound that follows from the deterministic case (e.g., [10]). Here we establish that smoothness is in fact necessary to obtain rates that are not strictly worse than with-replacement SGD. We consider running multiple passes of wor-SGD to solve the finite sum optimization problem given by the objective

F̂(w) := (1/n) ∑_{t=1}^n f(w; t),   (3)

where {f(w; t)}_{t=1}^n is a set of n convex, G-Lipschitz losses defined over a convex and compact domain W ⊆ ℝ^d. Throughout this section we let w★ := arg min_{w∈W} F̂(w) denote the minimizer of the objective Eq. (3). In every epoch k ∈ [K] we process the losses in the order specified by a permutation π_k : [n] ↔ [n] sampled uniformly at random, either once in the beginning of the algorithm (single-shuffle), or at the onset of every epoch (multi-shuffle). Multi-epoch wor-SGD initialized at w^1_1 ∈ W is specified by the following equations:

w^k_{t+1} ← Π_W(w^k_t − η g^k_t), where g^k_t ∈ ∂f^k_t(w^k_t); w^{k+1}_1 := w^k_{n+1},

where we denote f^k_t(w) := f(w; π_k(t)). A near-immediate implication of Theorem 1 is that there exists a set of convex losses on which a single epoch of wor-SGD cannot converge at a rate faster than 1/n^{1/4}. Theorem 5 presented below extends our basic construction from Theorem 1 to accommodate multiple epochs. The main challenge here is in devising a mechanism that allows fresh bad gradient steps to take place on every new epoch.

Theorem 5. Let n, K ∈ ℕ, K ≥ 4, n ≥ 4, c := 4/(2^{1/K} − 1), d ≥ 2^{6n} log(cnK), and W = B^{d′}_0(1) where d′ = (nK + 1)d. Then there exists a set of n convex, 4-Lipschitz losses such that after K epochs of either multi-shuffle or single-shuffle SGD initialized at w^1_1 = 0 with step size η ≤ 1/√(2nK), it holds that

𝔼[F̂(w̄) − F̂(w★)] = Ω(min{1, η√(n/J) + 1/(ηnK) + η}),

where w̄ is any suffix average of the last J epochs. In particular, we obtain a bound of Ω(n^{−1/4}K^{−3/4}) for any suffix average and any choice of η.

The proof of Theorem 5 is provided in the supplementary. The construction in the proof takes the idea that the training set can be encoded in the SGD iterate to the extreme. The loss function and gradient oracle are designed so as to record the training examples, in their full form and order, into the iterate. We then exploit this encoded information with an "adversarial" gradient oracle that returns the bad sub-gradients on each gradient step in every new epoch.
Next, we complement Theorem 5 with an upper bound that builds on stability arguments similar to those of the smooth case [20]. Importantly though, the lack of smoothness means worse stability rates and necessitates extra care in the technical arguments. Below, we prove the multi-shuffle case, and defer the full details of the single-shuffle case to the supplementary.

Theorem 6. Let S = {f(w; t)}_{t=1}^n be a set of n convex, G-Lipschitz losses over a convex and compact domain W ⊆ ℝ^d of diameter D, and consider running K ≥ 1 epochs of wor-SGD over S. Then, we have the following guarantees:

(i) For multi-shuffle, with step size η = D/(G n^{3/4} K^{1/2}), we have 𝔼[F̂(w̄) − F̂(w★)] ≤ 3GD/(n^{1/4} K^{1/2}).

(ii) For single-shuffle, with step size η = D/(2G n^{3/4} K^{3/4}) and assuming K ≥ n, we have 𝔼[F̂(w̄) − F̂(w★)] ≤ 10GD/(n^{1/4} K^{1/4}).

In both of the above bounds, w̄ = (1/(nK)) ∑_{k∈[K], t∈[n]} w^k_t, and the expectation is over the random permutations of the losses.

Proof (multi-shuffle case). Observe:

F̂(w̄) − F̂(w★) ≤ (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − F̂(w★))
 = (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − f^k_t(w★))
 = (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − f^k_t(w^k_t)) + (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (f^k_t(w^k_t) − f^k_t(w★))
 ≤ (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n (F̂(w^k_t) − f^k_t(w^k_t)) + D²/(2ηnK) + ηG²/2,

with the last inequality following from the standard nK-round regret bound for gradient descent [see e.g., 15]. To bound the other term, using Lemma 10, we relate the difference between the without-replacement loss distribution and the full batch objective to the uniform stability rate of SGD, which may then be bounded by applying Lemma 11:

𝔼[F̂(w^k_t) − f^k_t(w^k_t)] = 𝔼_{π_1,…,π_{k−1}} 𝔼_{π_k}[F̂(w^k_t) − f^k_t(w^k_t) | w^k_1] ≤ 𝔼_{π_1,…,π_{k−1}}[G ε^{SGD}_{stab}(t − 1)] = G ε^{SGD}_{stab}(t − 1) ≤ 2ηG²√t.

Concluding, we have that

𝔼[F̂(w̄) − F̂(w★)] ≤ (1/(nK)) ∑_{k=1}^K ∑_{t=1}^n 𝔼[F̂(w^k_t) − f^k_t(w^k_t)] + D²/(2ηnK) + ηG²/2
 ≤ (2/(nK)) ∑_{k=1}^K ∑_{t=1}^n ηG²√t + D²/(2ηnK) + ηG²/2
 ≤ 2ηG²√n + D²/(2ηnK) + ηG²/2 ≤ 3GD/(n^{1/4} K^{1/2}),

where the last inequality follows from our choice of η = D/(G n^{3/4} K^{1/2}). □

Acknowledgements and funding disclosure

This work was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No. 882396), by the Israel Science Foundation (grant numbers 993/17, 2549/19, 2188/20), by the Len Blavatnik and the Blavatnik Family foundation, by the Yandex Initiative in Machine Learning at Tel Aviv University, by a grant from the Tel Aviv University Center for AI and Data Science (TAD), and by an unrestricted gift from Google. Any opinions, findings, and conclusions or recommendations expressed in this work are those of the author(s) and do not necessarily reflect the views of Google.
1. What is the focus and contribution of the paper regarding SGD's behavior in stochastic convex optimization?
2. What are the strengths of the proposed approach, particularly in providing new intuitions for analyzing test errors?
3. What are the weaknesses of the paper, especially regarding the artificial construction of the failure example?
4. Do you have any concerns about the analysis relying on the averaged solution rather than the last iteration solution?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper
This paper studies the empirical risk and population risk (together with generalization) for different variants of SGD in the context of stochastic convex optimization (they also extend the analysis to other settings, i.e., convex in expectation, strongly convex). Their main contribution is to construct an example where (one-pass) SGD exhibits both empirical risk and generalization gap of Ω(1). Moreover, they show that this phenomenon does not exist if we use with-replacement SGD. Last, they derive upper and lower bounds for without-replacement SGD in the multi-epoch regime.

Strengths And Weaknesses
Strengths:
• This paper studies the behavior of SGD in SCO, providing new intuition for the analysis of the test error (population risk). By their construction, even in convex optimization there exists an instance where SGD provably minimizes the population risk while the empirical risk is of constant level, resulting in a constant generalization gap. This observation calls into question the rationale of the optimization-generalization decomposition framework (or ERM framework). The main reason is that SGD does not minimize the empirical loss, which differs greatly from the smooth setting. In short, this observation might lead to a new type of analysis in statistical learning, which can have a broad impact on the community.
• The other observations, together with their proof techniques, are also interesting. For example, the comparison between with and without replacement might provide some intuition on the advantage of multi-pass SGD in terms of its implicit regularization.
• The paper is well-structured and provides good intuition for the proof technique.
Weaknesses:
• The construction of the failure example seems artificial and might not be general enough. It relies on a special structure that (to my best knowledge) no realistic learning problem shares. It would be great if the authors could provide some connection between their constructed example and realistic examples.
• The analysis relies on the averaged solution instead of the last-iterate solution, which limits the general impact of this work. It would be great if the authors could at least comment on or conjecture the result for the last-iterate case.

Questions
Please see the weaknesses part.

Limitations
Please see the weaknesses part.
NIPS
Title Sparse Approximate Conic Hulls

Abstract
We consider the problem of computing a restricted nonnegative matrix factorization (NMF) of an m × n matrix X. Specifically, we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and C ∈ ℝ^{k×n}_{≥0}. Equivalently, given the matrix X, consider the problem of finding a small subset, S, of the columns of X such that the conic hull of S ε-approximates the conic hull of the columns of X, i.e., the distance of every column of X to the conic hull of the columns of S should be at most an ε-fraction of the angular diameter of X. If k is the size of the smallest ε-approximation, then we produce an O(k/ε)-sized O(ε)-approximation, yielding the first provable, polynomial time ε-approximation for this class of NMF problems, where also desirably the approximation is independent of n and m. Furthermore, we prove an approximate conic Carathéodory theorem, a general sparsity result, that shows that any column of X can be ε-approximated with an O(1/ε)-sparse combination from S. Our results are facilitated by a reduction to the problem of approximating convex hulls, and we prove that both the convex and conic hull variants are d-SUM-hard, resolving an open problem. Finally, we provide experimental results for the convex and conic algorithms on a variety of feature selection tasks.

1 Introduction

Matrix factorizations of all sorts (SVD, NMF, CU, etc.) are ubiquitous in machine learning and computer science. In general, given an m × n matrix X, the goal is to find a decomposition into a product of two matrices B ∈ ℝ^{m×k} and C ∈ ℝ^{k×n} such that the Frobenius norm between X and BC is minimized. If no further restrictions are placed on the matrices B and C, this problem can be solved optimally by computing the singular value decomposition. However, imposing restrictions on B and C can lead to factorizations which are more desirable for reasons such as interpretability and sparsity. One of the most common restrictions is non-negative matrix factorization (NMF), requiring B and C to consist only of non-negative entries (see [Berry et al., 2007] for a survey). Practically, NMF has seen widespread usage as it often produces nice factorizations that are frequently sparse. Typically NMF is accomplished by applying local search heuristics, and while NMF can be solved exactly in certain cases (see [Arora et al., 2016]), in general NMF is not only NP-hard [Vavasis, 2009] but also d-SUM-hard [Arora et al., 2016]. One drawback of factorizations such as SVD or NMF is that they can represent the data using a basis that may have no clear relation to the data. CU decompositions [Mahoney and Drineas, 2009] address this by requiring the basis to consist of input points. While it appears that the hardness of this problem has not been resolved, approximate solutions are known. Most notable is the additive approximation of Frieze et al. [2004], though more recently there have been advances on the multiplicative front [Drineas et al., 2008, Çivril and Magdon-Ismail, 2012, Guruswami and Sinop, 2012]. Similar restrictions have also been considered for NMF. Donoho and Stodden [2003] introduced a separability assumption for NMF, and Arora et al. [2016] showed that a NMF can be computed in polynomial time under this assumption.
Various other methods have since been proposed for NMF under the separability (or near-separability) assumption [Recht et al., 2012, Kumar et al., 2013, Benson et al., 2014, Gillis and Vavasis, 2014, Zhou et al., 2014, Kumar and Sindhwani, 2015]. The separability assumption requires that there exists a subset S of the columns of X such that X = X_S C for some nonnegative matrix C. This assumption can be restrictive in practice, e.g., when an exact subset does not exist but a close approximate subset does, i.e., X ≈ X_S C. To our knowledge, no exact or approximate polynomial time algorithms have been proposed for the general problem of computing a NMF under only the restriction that the columns must be selected from those of X. In this work, we fill this gap by arguing that a simple greedy algorithm can be used to provide a polynomial time ε-approximation algorithm for NMF under the column subset restriction. Note that the separability assumption is not required here: our theoretical analysis bounds the error of our selected columns versus the best possible columns that could have been chosen. The algorithm is based on recent work on fast algorithms for approximately computing the convex hull of a set of points [Blum et al., 2016]. As in previous approaches [Donoho and Stodden, 2003, Kumar et al., 2013], we formulate restricted NMF geometrically as finding a subset, S, of the columns of the matrix X whose conic hull, the set of all nonnegative combinations of columns of S, well-approximates the conic hull of X. Using the gnomonic projection, we reduce the conic hull problem to a convex hull problem and then apply the greedy strategy of Blum et al. [2016] to compute the convex hull of the projected points. Given a set of points P in ℝ^m, the convex hull of S ⊆ P, denoted Convex(S), is said to ε-approximate Convex(P) if the Hausdorff distance between Convex(S) and Convex(P) is at most ε · diameter(P). For a fixed ε > 0, suppose the minimum sized subset of P whose convex hull ε-approximates the convex hull of P has size k; then Blum et al. [2016] show that a simple greedy algorithm gives an ε′ = O(ε^{1/3}) approximation using at most k′ = O(k/ε^{2/3}) points of P, with an efficient O(nc(m + c/ε² + c²)) running time, where c = O(k_opt/ε^{2/3}). By careful analysis, we show that our reduction achieves the same guarantees for the conic problem. (Note Blum et al. [2016] present other trade-offs between k′ and ε′, which we argue carry over to the conic case as well.) Significantly, k′ and ε′ are independent of n and m, making this algorithm desirable for large high dimensional point sets. Note that our bounds on the approximation quality and the number of points do not explicitly depend on the dimension, as they are relative to the size of the optimal solution, which itself may or may not depend on the dimension. Like the X-RAY algorithm [Kumar et al., 2013], our algorithm is easy to parallelize, allowing it to be applied to large-scale problems. In addition to the above ε-approximation algorithm, we also present two additional theoretical results of independent interest. The first theoretical contribution provides justification for empirical observations about the sparsity of NMF [Lee and Seung, 1999, Ding et al., 2010]. Due to the high dimensional nature of many data sets, there is significant interest in sparse representations requiring far fewer points than the dimension.
Our theoretical justification for sparsity is based on Carathéodory's theorem: any point q in the convex hull of P can be expressed as a convex combination of at most m + 1 points from P. This is tight in the worst case for exact representation; however, the approximate Carathéodory theorem [Clarkson, 2010, Barman, 2015] states there is a point q′ which is a convex combination of O(1/ε²) points of P (i.e., independent of n and m) such that ∥q − q′∥ ≤ ε · diameter(P). This result has a long history with significant implications in machine learning, e.g., relating to the analysis of the perceptron algorithm [Novikoff, 1962], though the clean geometric statement of this theorem appears to be not well known outside the geometry community. Moreover, this approximation is easily computable with a greedy algorithm (e.g., [Blum et al., 2016]) similar to the Frank-Wolfe algorithm. The analogous statement for the linear case does not hold, so it is not immediately obvious whether such an approximate Carathéodory theorem should hold for the conic case, a question which we answer in the affirmative. As a second theoretical contribution, we address the question of whether or not the convex/conic hull problems are actually hard, i.e., whether approximations are actually necessary. We answer this question for both problems in the affirmative, resolving an open question of Blum et al. [2016], by showing that both the conic and convex problems are d-SUM-hard. Finally, we evaluate the performance of the greedy algorithms for computing the convex and conic hulls on a variety of feature selection tasks against existing methods. We observe that both the conic and convex algorithms perform well for a variety of feature selection tasks, though, somewhat surprisingly, the convex hull algorithm, for which previously no experimental results had been produced, yields consistently superior results on text datasets. We use our theoretical results to provide intuition for these empirical observations.

2 Preliminaries

Let P be a point set in ℝ^m. For any p ∈ P, we interchangeably use the terms vector and point, depending on whether or not we wish to emphasize the direction from the origin. Let ray(p) denote the unbounded ray passing through p whose base lies at the origin. Let unit(p) denote the unit vector in the direction of p; equivalently, unit(p) is the intersection of ray(p) with the unit hypersphere S^{m−1}. For any subset X = {x_1, …, x_k} ⊆ P, ray(X) = {ray(x_1), …, ray(x_k)} and unit(X) = {unit(x_1), …, unit(x_k)}. Given points p, q ∈ P, let d(p, q) = ∥p − q∥ denote their Euclidean distance, and let ⟨p, q⟩ denote their dot product. Let angle(ray(p), ray(q)) = angle(p, q) = cos^{−1}(⟨unit(p), unit(q)⟩) denote the angle between the rays ray(p) and ray(q), or equivalently between the vectors p and q. For two sets P, Q ⊆ ℝ^m, we write d(P, Q) = min_{p∈P, q∈Q} d(p, q), and for a single point q we write d(q, P) = d({q}, P); the same definitions apply to angle(). For any subset X = {x_1, …, x_k} ⊆ P, let Convex(X) = {∑_i α_i x_i | α_i ≥ 0, ∑_i α_i = 1} denote the convex hull of X. Similarly, let Conic(X) = {∑_i α_i x_i | α_i ≥ 0} denote the conic hull of X and DualCone(X) = {z ∈ ℝ^m | ⟨x, z⟩ ≥ 0 ∀x ∈ X} the dual cone. For any point q ∈ ℝ^m, the projection of q onto Convex(X) is the closest point to q in Convex(X), proj(q) = proj(q, Convex(X)) = arg min_{x∈Convex(X)} d(q, x).
Similarly, the angular projection of q onto Conic(X) is the angularly closest point to q in Conic(X), aproj(q) = aproj(q, Conic(X)) = arg min_{x∈Conic(X)} angle(q, x). Note that angular projection defines an entire ray of Conic(X), rather than a single point, from which without loss of generality we choose the point on the ray minimizing the Euclidean distance to q. In fact, abusing notation, we sometimes equivalently view Conic(X) as a set of rays rather than points, in which case aproj(ray(q)) = aproj(q) is the entire ray. For X ⊂ ℝ^m, let ∆ = ∆_X = max_{p,q∈X} d(p, q) denote the diameter of X. The angular diameter of X is φ = φ_X = max_{p,q∈X} angle(p, q). Similarly, φ_X(q) = max_{p∈X} angle(p, q) denotes the angular radius of the minimum radius cone centered around the ray through q and containing all of P.

Definition 2.1. Consider a subset X of a point set P ⊂ ℝ^m. X is an ε-approximation to Convex(P) if d_convex(X, P) = max_{p∈Convex(P)} d(p, Convex(X)) ≤ ε∆. Note d_convex(X, P) is the Hausdorff distance between Convex(X) and Convex(P). Similarly, X is an ε-approximation to Conic(P) if d_conic(X, P) = max_{p∈Conic(P)} angle(p, Conic(X)) ≤ εφ_P.

Note that the definition of ε-approximation for Conic(P) uses angular rather than Euclidean distance in order to be defined for rays, i.e., scaling a point outside the conic hull changes its Euclidean distance, but its angular distance is unchanged since its ray stays the same. Thus we find considering angles better captures what it means to approximate the conic hull than the distance based Frobenius norm which is often used to evaluate the quality of approximation for NMF. As we are concerned only with angles, without loss of generality we often will assume that all points in the input set P have been scaled to have unit length, i.e., P = unit(P). In our theoretical results, we will always assume that φ_P < π/2. Note that if P lies in the non-negative orthant, then for any strictly positive q, φ_P(q) < π/2. In the case that P is not strictly inside the positive orthant, the points can be uniformly translated by a small amount to ensure that φ_P < π/2.

3 A Simple Greedy Algorithm

Let P be a finite point set in ℝ^m (with unit lengths). Call a point p ∈ P extreme if it lies on the boundary of the conic hull (resp. convex hull). Observe that for any X ⊆ P containing all the extreme points, it holds that Conic(X) = Conic(P) (resp. Convex(X) = Convex(P)). Consider the simple greedy algorithm which builds a subset of points S by iteratively adding to S the point angularly furthest from the conic hull of the current point set S (for the convex hull, take the furthest point in distance). One can argue that in each round this algorithm selects an extreme point, and thus it can be used to find a subset of points whose hull captures that of P. Note that if the hull is not degenerate, i.e., no point on the boundary is expressible as a combination of other points on the boundary, then this produces the minimum sized subset capturing P. Otherwise, one can solve a recursive subproblem as discussed by Kumar et al. [2013] to exactly recover S. Here instead we consider finding a small subset of points (potentially much smaller than the number of extreme points) to approximate the hull. The question is then whether this greedy approach still yields a reasonable solution, which is not clear as there are simple examples showing the best approximate subset includes non-extreme points.
Moreover, arguing about the conic approximation directly is challenging as it involves angles and hence spherical (rather than planar) geometry. For the convex case, Blum et al. [2016] argued that this greedy strategy does yield a good approximation. Thus we seek a way to reduce our conic problem to an instance of the convex problem without introducing too much error in the process, which brings us to the gnomonic projection.

Let hplane(q) be the hyperplane defined by the equation ⟨q − x, q⟩ = 0, where q ∈ R^m is a unit-length normal vector. The gnomonic projection of P onto hplane(q) is defined as gp_q(P) = {ray(p) ∩ hplane(q) : p ∈ P} (see Figure 3.1). Note that gp_q(q) = q. For any point x in hplane(q), the inverse gnomonic projection is pg_q(x) = ray(x) ∩ S^{m−1}. Similar to other work [Kumar et al., 2013], we allow projections onto any hyperplane tangent to the unit hypersphere with normal q in the strictly positive orthant. A key property of the gnomonic projection is that the problem of finding the extreme points of the convex hull of the projected points is equivalent to finding the extreme points of the conic hull of P. (Additional properties of the gnomonic projection are discussed in the full version.)

Thus the strategy to approximate the conic hull should now be clear. Let P′ = gp_q(P). We apply the greedy strategy of Blum et al. [2016] to P′ to build a set of extreme points S, by iteratively adding to S the point furthest from the convex hull of the current point set S. This procedure is shown in Algorithm 1. We show that Algorithm 1 can be used to produce an ε-approximation to the restricted NMF problem. Formally, for ε > 0, let opt(P, ε) denote any minimum cardinality subset X ⊆ P which ε-approximates Conic(P), and let k_opt = |opt(P, ε)|. We consider the following problem.

Problem 3.1. Given a set P of n points in R^m such that φ_P ≤ π/2 − γ, for a constant γ > 0, and a value ε > 0, compute opt(P, ε).

Alternatively, one can fix k rather than ε, defining opt(P, k) = arg min_{X ⊆ P, |X| = k} d_conic(X, P) and ε_opt = d_conic(opt(P, k), P). Our approach works for either variant, though here we focus on the version in Problem 3.1. Note the bounded angle assumption applies to any collection of points in the strictly positive orthant (a small translation can be used to ensure this for any nonnegative data set). In this section we argue that Algorithm 1 produces an (α, β)-approximation to an instance (P, ε) of Problem 3.1, that is, a subset X ⊆ P such that d_conic(X, P) ≤ α and |X| ≤ β · k_opt = β · |opt(P, ε)|. For ε > 0, similarly define opt_convex(P, ε) to be any minimum cardinality subset X ⊆ P which ε-approximates Convex(P). Blum et al. [2016] gave an (α, β)-approximation for the following.

Problem 3.2. Given a set P of n points in R^m, and a value ε > 0, compute opt_convex(P, ε).

Note that the proofs of correctness and approximation quality from Blum et al. [2016] for Problem 3.2 do not immediately imply the same results when using Algorithm 1 for Problem 3.1. To see this, consider any points u, v on S^{m−1}. Note the angle between u and v is the same as their geodesic distance on S^{m−1}. Intuitively, we want to claim that the geodesic distance between u and v is roughly the same as the Euclidean distance between gp_q(u) and gp_q(v). While this is true for points near q, as we move away from q the correspondence breaks down (and the distortion is unbounded as the angle approaches π/2). This non-uniform distortion requires care, and thus the proofs are deferred to the full version.
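To make the reduction concrete, the following is a minimal sketch of Algorithm 1 as we read it from the description above. The function names and the choice of seed point are ours (the paper's pseudocode may differ), and dist_to_convex_hull stands for the approximate furthest-point subroutine discussed next. For nonnegative data, the projection direction q can be taken, for example, as the normalized all-ones vector.

```python
import numpy as np

def gnomonic_project(P, q):
    """Map each row p of P to ray(p) ∩ hplane(q).

    Assumes q is unit length and <p, q> > 0 for every row p,
    i.e., the bounded-angle assumption angle(p, q) < pi/2 holds.
    """
    return P / (P @ q)[:, None]

def greedy_conic_hull(P, q, k, dist_to_convex_hull):
    """Greedily select k rows of P whose conic hull approximates Conic(P).

    dist_to_convex_hull(x, S) should return (an estimate of) the Euclidean
    distance from x to Convex(S); a Frank-Wolfe-style implementation is
    sketched after Theorem 3.3 below.
    """
    Pp = gnomonic_project(P, q)      # conic problem -> convex problem
    # Seed with the projected point furthest from the center q
    # (one reasonable choice; the seeding rule is not pinned down here).
    S = [int(np.argmax(np.linalg.norm(Pp - q, axis=1)))]
    while len(S) < k:
        dists = [dist_to_convex_hull(x, Pp[S]) for x in Pp]
        far = int(np.argmax(dists))  # point furthest from the current hull
        if far in S:                 # remaining distances are (near) zero
            break
        S.append(far)
    return S                         # indices into P (columns of X in the NMF setting)
```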
Finally, observe that Algorithm 1 requires being able to compute the point furthest from the convex hull. To do so we use the (convex) approximate Carathéodory theorem, which is both theoretically and practically very efficient, and produces provably sparse solutions. As a stand-alone result, we first prove the conic analog of the approximate Carathéodory theorem. This result is of independent interest since it can be used to sparsify the returned solution of Algorithm 1, or of any other algorithm.

3.1 Sparsity and the Approximate Conic Carathéodory Theorem

Our first result is a conic approximate Carathéodory theorem. That is, given a point set P ⊆ R^m and a query point q, the angularly closest point to q in Conic(P) can be approximately expressed as a sparse combination of points from P. More precisely, one can compute a point t which is a conic combination of O(1/ε^2) points from P such that angle(q, t) ≤ angle(q, Conic(P)) + εφ_P.

The significance of this result is as follows. Recall that we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and the entries of C are non-negative. Ideally each point in X is expressed as a sparse combination from the basis B, that is, each column of C has very few non-zero entries. So suppose we are given some factorization BC, but C is dense. Then there is no problem: simply discard C and use our Carathéodory theorem to compute a new matrix C′ with sparse columns. Namely, treat each column of X as the query q and run the theorem for the point set P = B; the non-zero entries of the corresponding column of C′ are then just the selected combination from B. Not only does this mean we can sparsify any solution to our NMF problem (including those obtained by other methods), but it also means conceptually that rather than finding a good pair BC, one only needs to focus on finding the subset B, as is done in Algorithm 1. Note that Algorithm 1 does not require the input P to be non-negative, because φ_P < π/2 ensures that P can be rotated into the positive orthant.

While it appears the conic approximate Carathéodory theorem had not previously been stated, the convex version has a long history (e.g., implied by [Novikoff, 1962]). The algorithm to compute this sparse convex approximation is again a simple and fast greedy algorithm, which roughly speaking is a simplification of the Frank-Wolfe algorithm for this particular problem. Specifically, to find the projection of q onto Convex(P), start with any point t_0 ∈ Convex(P). In the ith round, find the point p_i ∈ P most extreme in the direction of q from t_{i−1} (i.e., maximizing ⟨q − t_{i−1}, p_i⟩) and set t_i to be the closest point to q on the segment t_{i−1}p_i (thus simplifying Frank-Wolfe, as we ignore step size issues). The standard analysis of this algorithm (e.g., [Blum et al., 2016]) gives the following.

Theorem 3.3 (Convex Carathéodory). For a point set P ⊆ R^m, ε > 0, and q ∈ R^m, one can compute, in O(|P|m/ε^2) time, a point t ∈ Convex(P) such that d(q, t) ≤ d(q, Convex(P)) + ε∆, where ∆ = ∆_P. Furthermore, t is a convex combination of O(1/ε^2) points of P.
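In code, the greedy projection just described is only a few lines. The sketch below is our reading of that procedure (the exact closed-form line search onto the segment t_{i−1}p_i replaces the Frank-Wolfe step size rule); tracking the chosen indices and step weights would also recover the sparse convex coefficients used to sparsify C above.

```python
import numpy as np

def approx_convex_projection(q, P, n_rounds):
    """Approximately project q onto Convex(P); rows of P are points.

    Theorem 3.3: after O(1/eps^2) rounds, d(q, t) <= d(q, Convex(P)) + eps * diam(P),
    and t is a convex combination of at most n_rounds + 1 points of P.
    """
    t = P[0].copy()                              # any starting point t_0 in Convex(P)
    for _ in range(n_rounds):
        p = P[int(np.argmax(P @ (q - t)))]       # most extreme point toward q
        seg = p - t
        denom = float(seg @ seg)
        if denom == 0.0:                         # t already coincides with p
            break
        # Closest point to q on the segment [t, p].
        step = np.clip(float((q - t) @ seg) / denom, 0.0, 1.0)
        t = t + step * seg
    return t

def dist_to_convex_hull(x, P, n_rounds=50):
    """Estimated Euclidean distance from x to Convex(P)."""
    return float(np.linalg.norm(x - approx_convex_projection(x, P, n_rounds)))
```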
Again by exploiting properties of the gnomonic projection, we are able to prove a conic analog of the above theorem. Note that for P ⊂ R^m, P is contained in the linear span of at most m points from P, and similarly the exact Carathéodory theorem states that any point q ∈ Convex(P) is expressible as a convex combination of at most m + 1 points from P. As the conic hull lies between the linear case (with all combinations) and the convex case (with non-negative combinations summing to one), it is not surprising that an exact conic Carathéodory theorem holds. However, the linear analog of the approximate convex Carathéodory theorem does not hold, and so the following conic result is not a priori obvious.

Theorem 3.4. Let P ⊂ R^m be a point set, let q be such that φ_P(q) < π/2 − γ for some constant γ > 0, and let ε > 0 be a parameter. Then one can find, in O(|P|m/ε^2) time, a point t ∈ Conic(P) such that angle(q, t) ≤ angle(q, Conic(P)) + εφ_P(q). Moreover, t is a conic combination of O(1/ε^2) points from P.

Due to space constraints, the detailed proof of Theorem 3.4 appears in the full version. In the proof, the dependence on γ is made clear, but we make a remark about it here. If ε is kept fixed, γ enters the running time roughly by a factor of tan^2(π/2 − γ). Alternatively, if the running time is fixed, the approximation error roughly depends on the factor 1/tan(π/2 − γ).

We now give a simple example of a high dimensional point set which shows that our bounded angle assumption is required for the conic Carathéodory theorem to hold. Let P consist of the standard basis vectors in R^m, let q be the all-ones vector, and let ε be a parameter. Let X be a subset of P of size k, and consider aproj(q) = aproj(q, X). As P consists of basis vectors, each of which has all but one entry set to zero, aproj(q) will have at most k non-zero entries. By the symmetry of q it is also clear that all non-zero entries in aproj(q) should have the same value. Without loss of generality assume that this value is 1, and hence the magnitude of aproj(q) is √k. Thus for aproj(q) to be an ε-approximation to q, angle(aproj(q), q) = cos^{−1}(k/(√k·√m)) = cos^{−1}(√(k/m)) < ε. Hence for a fixed ε, the number of points required to ε-approximate q depends on m, while the conic Carathéodory theorem should be independent of m.
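This counterexample is easy to check numerically. The snippet below (our own illustration, not from the paper) confirms that the best k-sparse conic approximation of the all-ones query has angle cos^{−1}(√(k/m)) to it, so for any fixed ε the required k grows linearly with m.

```python
import numpy as np

def angle(u, v):
    c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

eps = 0.3
for m in (10, 100, 1000):
    q = np.ones(m)
    # Smallest k with angle cos^{-1}(sqrt(k/m)) below eps.
    k = next(k for k in range(1, m + 1) if np.arccos(np.sqrt(k / m)) < eps)
    t = np.zeros(m)
    t[:k] = 1.0                       # k equal non-zero entries, by symmetry
    assert abs(angle(t, q) - np.arccos(np.sqrt(k / m))) < 1e-9
    print(m, k)                       # k/m stays near cos^2(eps), so k = Θ(m)
```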
3.2 Approximating the Conic Hull

We now prove that Algorithm 1 yields an approximation to the conic hull of a given point set, and hence an approximation to the nonnegative matrix factorization problem. As discussed above, Blum et al. [2016] previously provided the following (α, β)-approximation for Problem 3.2.

Theorem 3.5 ([Blum et al., 2016]). For a set P of n points in R^m, and ε > 0, the greedy strategy, which iteratively adds the point furthest from the current convex hull, gives a ((8ε^{1/3} + ε)∆, O(1/ε^{2/3}))-approximation to Problem 3.2, and has running time O(nc(m + c/ε^2 + c^2)), where c = O(k_opt/ε^{2/3}).

Our second result is a conic analog of the above theorem.

Theorem 3.6. Given a set P of n points in R^m such that φ_P ≤ π/2 − γ for a constant γ > 0, and a value ε > 0, Algorithm 1 gives a ((8ε^{1/3} + ε)φ_P, O(1/ε^{2/3}))-approximation to Problem 3.1, and has running time O(nc(m + c/ε^2 + c^2)), where c = O(k_opt/ε^{2/3}).

Bounding the approximation error requires carefully handling the distortion due to the gnomonic projection, and the details are presented in the full version. Additionally, Blum et al. [2016] provide other (α, β)-approximations, for different values of α and β, and in the full version these other results are also shown to hold for the conic case.

4 Hardness of the Convex and Conic Problems

This section gives a reduction from d-SUM to the convex approximation of Problem 3.2, implying it is d-SUM-hard. In the full version a similar setup is used to argue that the conic approximation of Problem 3.1 is d-SUM-hard. In fact, if Problem 3.1 allowed instances where φ_P = π/2, the reduction would be virtually the same. However, arguing that the problem remains hard under our requirement that φ_P ≤ π/2 − γ is non-trivial, and some of the calculations become challenging and lengthy. The reductions to both problems are partly inspired by Arora et al. [2016]. However, here we use the somewhat non-standard version of d-SUM where repetitions are allowed, as described below.

Problem 4.1 (d-SUM). In the d-SUM problem we are given a set S = {s_1, s_2, ..., s_N} of N values, each in the interval [0, 1], and the goal is to determine whether there is a set of d numbers (not necessarily distinct) whose sum is exactly d/2.

It was shown by Patrascu and Williams [2010] that if d-SUM can be solved in N^{o(d)} time, then 3-SAT has a sub-exponential time algorithm, i.e., the Exponential Time Hypothesis is false.

Theorem 4.2 (d-SUM-hard). Let d < N^{0.99} and δ < 1. If d-SUM on N numbers of O(d log N) bits can be solved in O(N^{δd}) time, then 3-SAT on n variables can be solved in 2^{o(n)} time.

We will prove that the following decision version of Problem 3.2 is d-SUM-hard. Note that in this section the dimension will be denoted by d rather than m, as this is standard for d-SUM reductions.

Problem 4.3. Given a set P of n points in R^d, a value ε > 0, and an integer k, is there a subset X ⊆ P of k points such that d_convex(X, P) ≤ ε∆, where ∆ is the diameter of P?

Given an instance of d-SUM with N values S = {s_1, s_2, ..., s_N}, we construct an instance of Problem 4.3 where P ⊂ R^{d+2}, k = d, and ε = 1/3 (or any sufficiently small value). The idea is to create d clusters, each containing N points corresponding to a choice of one of the s_i values. The clusters are positioned such that exactly one point from each cluster must be chosen. The d + 2 coordinates are labeled a_i for i ∈ [d], w, and v. Together, a_1, ..., a_d determine the cluster. The w dimension is used to compute the sum of the chosen s_i values. The v dimension is used as a threshold to determine whether the d-SUM instance maps to a yes or no instance of Problem 4.3. Let w(p_j) denote the w value of an arbitrary point p_j. We assume d ≥ 2, as d-SUM is trivial for d = 1. Let e_1, e_2, ..., e_d ∈ R^d be the standard basis in R^d, e_1 = (1, 0, ..., 0), e_2 = (0, 1, 0, ..., 0), ..., e_d = (0, ..., 0, 1). Together they form the unit d-simplex, and they define the d clusters in the construction. Finally, let ∆* = √(2 + (εs_max − εs_min)^2) be a constant, where s_max and s_min are, respectively, the maximum and minimum values in S.

Definition 4.4. The set of points P ⊂ R^{d+2} consists of the following:
p^i_j points: for each i ∈ [d] and j ∈ [N], set (a_1, ..., a_d) = e_i, w = εs_j, and v = 0.
q point: a_i = 1/d for each i ∈ [d], w = ε/2, and v = 0.
q′ point: a_i = 1/d for each i ∈ [d], w = ε/2, and v = ε∆*.

Lemma 4.5 (Proof in full version). The diameter of P, ∆_P, is equal to ∆*.

We prove completeness and soundness of the reduction. Below, P^i = ∪_j p^i_j denotes the ith cluster.

Observation 4.6. If max_{p ∈ P} d(p, Convex(X)) ≤ ε∆, then d_convex(X, P) ≤ ε∆: for point sets A and B = {b_1, ..., b_m}, if we fix a ∈ Convex(A), then for any b = Σ_i α_i b_i ∈ Convex(B) we have ||a − b|| = ||a − Σ_i α_i b_i|| = ||Σ_i α_i (a − b_i)|| ≤ Σ_i α_i ||a − b_i|| ≤ max_i ||a − b_i||.

Lemma 4.7 (Completeness). If there is a subset {s_{k_1}, s_{k_2}, ..., s_{k_d}} of d values (not necessarily distinct) such that Σ_{i ∈ [d]} s_{k_i} = d/2, then the above described instance of Problem 4.3 is a true instance, i.e., there is a subset X ⊆ P of size d with d_convex(X, P) ≤ ε∆.
Proof: For each value s_{k_i}, consider the point x_i = (e_i, ε·s_{k_i}, 0), which by Definition 4.4 is a point in P. Let X = {x_1, ..., x_d}. We now prove max_{p ∈ P} d(p, Convex(X)) ≤ ε∆, which by Observation 4.6 implies that d_convex(X, P) ≤ ε∆. First observe that for any p^i_j in P, d(p^i_j, x_i) = |w(p^i_j) − w(x_i)| = |εs_j − εs_{k_i}| ≤ ε∆. The only other points in P are q and q′. Note that d(q, q′) = ε∆* = ε∆ by Lemma 4.5. Thus if we can prove that q ∈ Convex(X), then we will have shown max_{p ∈ P} d(p, Convex(X)) ≤ ε∆. Specifically, we prove that the convex combination x = (1/d) Σ^d_{i=1} x_i is the point q. As X contains exactly one point from each set P^i, and in each such set all points have a_i = 1 and all other a_j = 0, it holds that x has value 1/d in all of the a coordinates. All points in X have v = 0, and so this holds for x as well. Thus we only need to verify that w(x) = w(q) = ε/2, for which we have w(x) = (1/d) Σ_i w(x_i) = (1/d) Σ_i εs_{k_i} = (1/d)(εd/2) = ε/2.

Proving soundness requires some helper lemmas. Note that in the above proof we constructed a solution to Problem 4.3 that selected exactly one point from each cluster P^i. We now prove that this is a required property.

Lemma 4.8 (Proof in full version). Let P ⊂ R^{d+2} be as defined above, and let X ⊆ P be a subset of size d. If d_convex(X, P) ≤ ε∆, then for all i, X contains exactly one point from P^i.

Lemma 4.9 (Proof in full version). If d_convex(X, P) ≤ ε∆, then q ∈ Convex(X), and moreover q = (1/d) Σ_{x_i ∈ X} x_i.

Lemma 4.10 (Soundness). Let P be an instance of Problem 4.3 generated from a d-SUM instance S, as described in Definition 4.4. If there is a subset X ⊆ P of size d such that d_convex(X, P) ≤ ε∆, then there is a choice of d values from S that sum to exactly d/2.

Proof: From Lemma 4.8 we know that X consists of exactly one point from each cluster P^i. Thus for each x_i ∈ X, w(x_i) = εs_{k_i} for some s_{k_i} ∈ S. By Lemma 4.9, q = (1/d) Σ_i x_i, which implies w(q) = (1/d) Σ_i w(x_i). By Definition 4.4, w(q) = ε/2, which implies ε/2 = (1/d) Σ_i w(x_i) = (1/d) Σ_i εs_{k_i}. Thus we have a set {s_{k_1}, ..., s_{k_d}} of d values from S such that Σ_i s_{k_i} = d/2.

Lemma 4.7 and Lemma 4.10 immediately imply the following.

Theorem 4.11. For point sets in R^{d+2}, Problem 4.3 is d-SUM-hard.
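The reduction itself is easy to implement and sanity-check. The sketch below (our own illustration; the helper names are hypothetical) builds the point set of Definition 4.4 and numerically verifies the completeness direction of Lemma 4.7 on a toy yes-instance.

```python
import numpy as np

def build_instance(s, d, eps=1/3):
    """Point set of Definition 4.4 for d-SUM values s; rows are points in R^{d+2}."""
    delta_star = np.sqrt(2 + (eps * max(s) - eps * min(s)) ** 2)
    pts = []
    for i in range(d):                       # cluster P^i
        for sj in s:
            p = np.zeros(d + 2)
            p[i] = 1.0                       # (a_1, ..., a_d) = e_i
            p[d] = eps * sj                  # w coordinate
            pts.append(p)                    # v coordinate stays 0
    q = np.zeros(d + 2)
    q[:d] = 1.0 / d
    q[d] = eps / 2
    q_prime = q.copy()
    q_prime[d + 1] = eps * delta_star
    return np.array(pts + [q, q_prime]), q, delta_star

# Toy yes-instance: d = 3 and 0.2 + 0.5 + 0.8 = 3/2 = d/2.
s, d, eps = [0.2, 0.5, 0.8, 0.9], 3, 1 / 3
P, q, delta_star = build_instance(s, d, eps)
chosen = [0.2, 0.5, 0.8]
X = [np.r_[np.eye(d)[i], eps * si, 0.0] for i, si in enumerate(chosen)]
assert np.allclose(np.mean(X, axis=0), q)    # q = (1/d) sum_i x_i (Lemma 4.7)
for i in range(d):                           # every cluster point is close to x_i
    for sj in s:
        assert abs(eps * sj - X[i][d]) <= eps * delta_star + 1e-12
print("completeness verified on the toy instance")
```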
5 Experimental Results

We report an experimental comparison of the proposed greedy algorithm for conic hulls, the greedy algorithm for convex hulls (the conic hull algorithm without the projection step) [Blum et al., 2016], the X-RAY (max) algorithm [Kumar et al., 2013], a modified version of X-RAY, dubbed mutant X-RAY, which simply selects the point furthest away from the current cone (i.e., with the largest residual), and a γ-shifted version of the conic hull algorithm described below. Other methods such as Hottopixx [Recht et al., 2012, Gillis and Luce, 2014] and SPA [Gillis and Vavasis, 2014] were not included due to their similar performance to the above methods.

For our experiments, we considered the performance of each of the methods when used to select features for a variety of SVM classification tasks on various image, text, and speech data sets, including several from the Arizona State University feature selection repository [Li et al., 2016] as well as the UCI Reuters dataset and the BBC News dataset [Greene and Cunningham, 2006]. The Reuters and BBC text datasets are represented using the TF-IDF representation. For the Reuters dataset, only the ten most frequent topics were used for classification. In all datasets, columns (corresponding to features) that were identically equal to zero were removed from the data matrix. For each problem, the data is divided using a 30/70 train/test split, the features are selected by the indicated method, and then an SVM classifier is trained using only the selected features. For the conic and convex hull methods, ε is set to 0.1. The accuracy (the percentage of correctly classified instances) is plotted versus the number of selected features for each method in Figure 4.1. Additional experimental results can be found in the full version.

Generally speaking, the convex, mutant X-RAY, and shifted conic algorithms seem to consistently perform the best on these tasks. The difference in performance between convex and conic is most striking on the two text data sets, Reuters and BBC. In the case of BBC and Reuters, this is likely due to the fact that many of the columns of the TF-IDF matrix are orthogonal. We note that the quality of both X-RAY and conic is improved if thresholding is used when constructing the feature matrix, but they still seem to underperform the convex method on text datasets. The text datasets are also interesting in that not only do they violate the explicit assumption in our theorems that the angular diameter of the conic hull be strictly less than π/2, but there are in fact many mutually orthogonal columns of the document-feature matrix.

This observation motivates the γ-shifted version of the conic hull algorithm, which simply takes the input matrix X, adds γ to all of the entries (essentially translating the data along the all-ones vector), and then applies the conic hull algorithm. Let 1_{a,b} denote the a × b matrix of ones. After a nonnegative shift, the angular assumption is satisfied, and the restricted NMF problem is that of approximating (X + γ1_{m,n}) as (B + γ1_{m,k})C, where the columns of B are again chosen from those of X. Under the Frobenius norm, ||(X + γ1_{m,n}) − (B + γ1_{m,k})C||_F^2 = Σ_{i,j} (X_{ij} − B_{i,:}C_{:,j} + γ(1 − ||C_{:,j}||_1))^2. As C must be a nonnegative matrix, the shifted conic case acts like the original conic case plus a penalty that encourages the columns of C to sum to one (i.e., it is a hybrid between the conic case and the convex case). The plots illustrate the performance of the γ-shifted conic hull algorithm for γ = 10. After the shift, the performance more closely matches that of the convex and mutant X-RAY methods on TF-IDF features.

Given these experimental results and the simplicity of the proposed convex and conic methods, we suggest that both methods be added to practitioners' toolboxes. In particular, the superior performance of the convex algorithm on text datasets, compared to X-RAY and the conic algorithm, seems to suggest that these types of "convex" factorizations may be more desirable for TF-IDF features.

Acknowledgments

Greg Van Buskirk and Ben Raichel were partially supported by NSF CRII Award 1566137. Nicholas Ruozzi was partially supported by the DARPA Explainable Artificial Intelligence Program under contract number N66001-17-2-4032 and by NSF grant III-1527312.
1. What is the main contribution of the paper regarding sparse approximate conic hulls?
2. What are the strengths and weaknesses of the proposed algorithms in the paper?
3. How does the reviewer assess the hardness results for approximate conic and convex hull problems?
4. What are the concerns regarding the formulation of the problems in the paper, especially regarding the choice of angular metric and the gamma-shifted conic version?
5. How does the paper compare to existing works in convex geometry and NMF?
6. Are there any minor comments or suggestions for improving the paper?
Review
Review

The paper "Sparse approximate conic hulls" develops conic analogues of approximation problems in convex geometry, establishes hardness results for approximate convex and conic hulls, and considers these in the context of non-negative matrix factorization. The paper also presents numerical results comparing the approximate conic hull and convex hull algorithms, a modified approximate conic hull algorithm (obtained by first translating the data), and other existing algorithms on a feature-selection problem.

The first theoretical contribution is a conic variant of the (constructive) approximate Carathéodory theorem devised for the convex setting. This is obtained by transforming the rays (from the conic problem) into a set of vectors via the "gnomonic projection," applying the approximate Carathéodory theorem in the convex setting, and transforming back. The main effort is to ensure the error behavior can be controlled when the gnomonic projection and its inverse are applied. This requires the points to have angle to the "center" strictly smaller than π/2. This kind of condition on the points persists throughout the "conic" results in the paper. The second main contribution is establishing hardness results for the approximate conic and convex hull problems. Finally, the main point of the paper seems to be that the epsilon-approximate conic hull algorithm can be used to approximately solve NMF under the column subset restriction.

The paper is quite well written, albeit a little dense, with much of the core technical content relegated to the supplementary material. The results are interesting from a purely convex geometry point of view. Nevertheless, I am somewhat unconvinced about certain issues in the formulation of the problems in the paper (rather than the technical results of the paper, which are nice):
-- The choice of angular metric seems a bit arbitrary (there are other ways to put a distance on the non-negative orthant that is naturally adapted to the conic setting, such as the Hilbert metric). Perhaps the angular metric is best suited to using Frobenius norm error in the matrix factorization problem? If so, it would be great if the authors could make this more clear.
-- The gamma-shifted conic version performs well in experiments (and the original conic version does not), which is interesting. How should we choose the shift, though? What reason is there not to make the shift very large? Is it possible to "pull back" the gamma shift throughout the paper, and formulate a meaningful version of approximate Carathéodory that has a parameter gamma? Perhaps choosing the best case over gamma is a more interesting formulation (from a practical point of view) than the vanilla conic version studied in this paper.
In addition to these concerns, I'm not sure how much innovation there is over the existing convex method of Blum et al., and over the existing hardness results for NMF in Arora et al. (I am not expert enough to know for sure, but the paper feels a little incremental).
Minor comments:
-- p3 line 128: this is not the usual definition of "extreme" point/ray in convex geometry (although lines 134-136 in some sense deal with the difference between the usual definition and the definition in this paper, via the notion of "degenerate").
-- p4 line 149: one could also consider using the "base" of the cone, defined by intersection with any hyperplane defined by a vector in the interior of the dual cone of the points (not just the all-ones vector), instead of using the gnomonic projection. This preserves the extreme points, so the basic strategy might work, but perhaps doesn't play nicely with Frobenius norm error in the NMF problem? It would be helpful if the authors could briefly explain why this approach is not favorable, and why the gnomonic projection approach makes the most sense.
-- p5 line 216, Theorem 3.4: it would be useful if the authors described the dependence on gamma in the parameters of the theorem. It seems this will be very bad as the angular bound approaches π/2 (i.e., as gamma approaches 0), and it would be good to be upfront about this (since the authors do a good job of making clear that for the problem setup in this paper the bounded angle assumption is necessary).
-- p6 line 235: there is some strange typesetting with the reference [16] in the Theorem statement.
-- p6 line 239: again, the dependence on gamma would be nice to have here (or in a comment afterwards).
-- p6 line 253: it may be useful to add a sentence saying how this "non-standard" version of d-SUM differs from the "standard" version, for the non-expert reader.
-- p8 lines 347-348: this is an interesting observation, that the gamma-shifted conic case is a sort of interpolant between the convex and conic cases. It would be interesting to be able to automatically tune gamma for a given scenario.
NIPS
Our theoretical justification for sparsity is based on Carathéodory’s theorem: any point q in the convex hull of P can be expressed as a convex combination of at most m+ 1 points from P . This is tight in the worst case for exact representation, however the approximate Carathéodory theorem [Clarkson, 2010, Barman, 2015] states there is a point q′ which is a convex combination of O(1/ε2) points of P (i.e., independent of n and m) such that ||q − q′|| ≤ ε · diameter(P ). This result has a long history with significant implications in machine learning, e.g., relating to the analysis of the perceptron algorithm [Novikoff, 1962], though the clean geometric statement of this theorem appears to be not well known outside the geometry community. Moreover, this approximation is easily computable with a greedy algorithm (e.g., [Blum et al., 2016]) similar to the Frank-Wolfe algorithm. The analogous statement for the linear case does not hold, so it is not immediately obvious whether such an approximate Carathéodory theorem should hold for the conic case, a question which we answer in the affirmative. As a second theoretical contribution, we address the question of whether or not the convex/conic hull problems are actually hard, i.e., whether approximations are actually necessary. We answer this question for both problems in the affirmative, resolving an open question of Blum et al. [2016], by showing both that the conic and convex problems are d-SUM-hard. Finally, we evaluate the performance of the greedy algorithms for computing the convex and conic hulls on a variety of feature selection tasks against existing methods. We observe that, both the conic and convex algorithms perform well for a variety of feature selection tasks, though, somewhat surprisingly, the convex hull algorithm, for which previously no experimental results had been produced, yields consistently superior results on text datasets. We use our theoretical results to provide intuition for these empirical observations. 2 Preliminaries Let P be a point set in Rm. For any p ∈ P , we interchangeably use the terms vector and point, depending on whether or not we wish to emphasize the direction from the origin. Let ray(p) denote the unbounded ray passing through p, whose base lies at the origin. Let unit(p) denote the unit vector in the direction of p, or equivalently unit(p) is the intersection of ray(p) with the unit hypersphere S(m−1). For any subset X = {x1, . . . , xk} ⊆ P , ray(X) = {ray(x1), . . . , ray(xk)} and unit(X) = {unit(x1), . . . , unit(xk)}. Given points p, q ∈ P , let d(p, q) = ||p−q|| denote their Euclidean distance, and let 〈p, q〉 denote their dot product. Let angle(ray(p), ray(q)) = angle(p, q) = cos−1(〈unit(p), unit(q)〉) denote the angle between the rays ray(p) and ray(q), or equivalently between vectors p and q. For two sets, P,Q ⊆ Rm, we write d(P,Q) = minp∈P,q∈Q d(p, q) and for a single point q we write d(q, P ) = d({q}, P ), and the same definitions apply to angle(). For any subset X = {x1, . . . , xk} ⊆ P , let Convex(X) = { ∑ i αixi | αi ≥ 0, ∑ i αi = 1} denote the convex hull of X . Similarly, let Conic(X) = {∑i αixi | αi ≥ 0} denote the conic hull of X and DualCone(X) = {z ∈ X | 〈x, z〉 ≥ 0 ∀x ∈ X} the dual cone. For any point q ∈ Rm, the projection of q onto Convex(X) is the closest point to q in Convex(X), proj(q) = proj(q,Convex(X)) = arg minx∈Convex(X) d(q, x). 
Similarly the angular projection of q onto Conic(X) is the angularly closest point to q in Conic(X), aproj(q) = aproj(q,Conic(X)) = arg minx∈Conic(X) angle(q, x). Note that angular projection defines an entire ray of Conic(X), rather than a single point, which without loss of generality we choose the point on the ray minimizing the Euclidean distance to q. In fact, abusing notation, we sometimes equivalently view Conic(X) as a set of rays rather than points, in which case aproj(ray(q)) = aproj(q) is the entire ray. For X ⊂ Rm, let ∆ = ∆X = maxp,q∈X d(p, q) denote the diameter of X . The angular diameter of X is φ = φX = maxp,q∈X angle(p, q). Similarly φX(q) = maxp∈X angle(p, q) denotes the angular radius of the minimum radius cone centered around the ray through q and containing all of P . Definition 2.1. Consider a subsetX of a point set P ⊂ Rm. X is an ε-approximation to Convex(P ) if dconvex(X,P ) = maxp∈Convex(P ) d(p,Convex(X)) ≤ ε∆. Note dconvex(X,P ) is the Hausdorff distance between Convex(X) and Convex(P ). Similarly X is an ε-approximation to Conic(P ) if dconic(X,P ) = maxp∈Conic(P ) angle(p,Conic(X)) ≤ εφP . Note that the definition of ε-approximation for Conic(P ) uses angular rather than Euclidean distance in order to be defined for rays, i.e., scaling a point outside the conic hull changes its Euclidean distance but its angular distance is unchanged since its ray stays the same. Thus we find considering angles better captures what it means to approximate the conic hull than the distance based Frobenius norm which is often used to evaluate the quality of approximation for NMF. As we are concerned only with angles, without loss of generality we often will assume that all points in the input set P have been scaled to have unit length, i.e., P = unit(P ). In our theoretical results, we will always assume that φP < π/2. Note that if P lies in the non-negative orthant, then for any strictly positive q, φP (q) < π/2. In the case that the P is not strictly inside the positive orthant, the points can be uniformly translated a small amount to ensure that φP < π/2. 3 A Simple Greedy Algorithm Let P be a finite point set in Rm (with unit lengths). Call a point p ∈ P extreme if it lies on the boundary of the conic hull (resp. convex hull). Observe that for any X ⊆ P , containing all the extreme points, it holds that Conic(X) = Conic(P ) (resp. Convex(X) = Convex(P )). Consider the simple greedy algorithm which builds a subset of points S, by iteratively adding to S the point angularly furthest from the conic hull of the current point set S (for the convex hull take the furthest point in distance). One can argue in each round this algorithm selects an extreme point, and thus can be used to find a subset of points whose hull captures that of P . Note if the hull is not degenerate, i.e., no point on the boundary is expressible as a combination of other points on the boundary, then this produces the minimum sized subset capturing P . Otherwise, one can solve a recursive subproblem as discussed by Kumar et al. [2013] to exactly recover S. Here instead we consider finding a small subset of points (potentially much smaller than the number of extreme points) to approximate the hull. The question is then whether this greedy approach still yields a reasonable solution, which is not clear as there are simple examples showing the best approximate subset includes non-extreme points. 
Moreover, arguing about the conic approximation directly is challenging as it involves angles and hence spherical (rather than planar) geometry. For the convex case, Blum et al. [2016] argued that this greedy strategy does yield a good approximation. Thus we seek a way to reduce our conic problem to an instance of the convex problem, without introducing too much error in the process, which brings us to the gnomonic projection. Let hplane(q) be the hyperplane defined by the equation 〈(q − x), q〉 = 0 where q ∈ Rm is a unit length normal vector. The gnomonic projection of P onto hplane(q), is defined as gpq(P ) = {ray(P )∩ hplane(q)} (see Figure 3.1). Note that gpq(q) = q. For any point x in hplane(q), the inverse gnomonic projection is pgq(x) = ray(x)∩ S(m−1). Similar to other work [Kumar et al., 2013], we allow projections onto any hyperplane tangent to the unit hypersphere with normal q in the strictly positive orthant. A key property of the gnomonic projection, is that the problem of finding the extreme points of the convex hull of the projected points is equivalent to finding the extreme points of the conic hull of P . (Additional properties of the gnomonic projection are discussed in the full version.) Thus the strategy to approximate the conic hull should now be clear. Let P ′ = gpq(P ). We apply the greedy strategy of Blum et al. [2016] to P ′ to build a set of extreme points S, by iteratively adding to S the point furthest from the convex hull of the current point set S. This procedure is shown in Algorithm 1. We show that Algorithm 1 can be used to produce an ε-approximation to the restricted NMF problem. Formally, for ε > 0, let opt(P, ε) denote any minimum cardinality subset X ⊆ P which ε-approximates Conic(P ), and let kopt = |opt(P, ε)|. We consider the following problem. Problem 3.1. Given a set P of n points in Rm such that φP ≤ π/2− γ, for a constant γ > 0, and a value ε > 0, compute opt(P, ε). Alternatively one can fix k rather than ε, defining opt(P, k) = arg minX⊆P,|X|=k dconic(X,P ) and εopt = dconic(opt(P, k), P ). Our approach works for either variant, though here we focus on the version in Problem 3.1. Note the bounded angle assumption applies to any collection of points in the strictly positive orthant (a small translation can be used to ensure this for any nonnegative data set). In this section we argue Algorithm 1 produces an (α, β)-approximation to an instance (P, ε) of Problem 3.1, that is a subset X ⊆ P such that dconic(X,P ) ≤ α and |X| ≤ β ·kopt = β · |opt(P, ε)|. For ε > 0, similarly define optconvex(P, ε) to be any minimum cardinality subset X ⊆ P which ε-approximates Convex(P ). Blum et al. [2016] gave (α, β)-approximation for the following. Problem 3.2. Given a set P of n points in Rm, and a value ε > 0, compute optconvex(P, ε). Note the proofs of correctness and approximation quality from Blum et al. [2016] for Problem 3.2 do not immediately imply the same results for using Algorithm 1 for Problem 3.1. To see this, consider any points u, v on S(m−1). Note the angle between u and v is the same as their geodesic distance on S(m−1). Intuitively, we want to claim the geodesic distance between u and v is roughly the same as the Euclidean distance between gpq(u) and gpq(v). While this is true for points near q, as we move away from q the correspondence breaks down (and is unbounded as you approach π/2). This non-uniform distortion requires care, and thus the proofs had to be moved to the full version. 
Finally, observe that Algorithm 1, requires being able to compute the point furthest from the convex hull. To do so we use the (convex) approximate Carathéodory, which is both theoretically and practically very efficient, and produces provably sparse solutions. As a stand alone result, we first prove the conic analog of the approximate Carathéodory theorem. This result is of independent interest since it can be used to sparsify the returned solution from Algorithm 1, or any other algorithm. 3.1 Sparsity and the Approximate Conic Carathéodory Theorem Our first result is a conic approximate Carathéodory theorem. That is, given a point set P ⊆ Rm and a query point q, then the angularly closest point to q in Conic(P ) can be approximately expressed as q x x′hplane(q) a sparse combination of point from P . More precisely, one can compute a point t which is a conic combination of O(1/ε2) points from P such that angle(q, t) ≤ angle(q,Conic(P )) + εφP . The significance of this result is as follows. Recall that we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and the entries of C are non-negative. Ideally each point in X is expressed as a sparse combination from the basis B, that is each column of C has very few non-zero entries. So suppose we are given any factorization BC, but C is dense. Then no problem, just throw out C, and use our Carathéodory theorem to compute a new matrix C ′ with sparse columns. Namely treat each column of X as the query q and run the theorem for the point set P = B, and then the non-zero entries of corresponding column of C ′ are just the selected combination from B. Not only does this mean we can sparsify any solution to our NMF problem (including those obtained by other methods), but it also means conceptually that rather than finding a good pair BC, one only needs to focus on finding the subset B, as is done in Algorithm 1. Note that Algorithm 1 allows non-negative inputs in P because φP < π/2 ensures P can be rotated into the positive orthant. While it appears the conic approximate Carathéodory theorem had not previously been stated, the convex version has a long history (e.g., implied by [Novikoff, 1962]). The algorithm to compute this sparse convex approximation is again a simple and fast greedy algorithm, which roughly speaking is a simplification of the Frank-Wolfe algorithm for this particular problem. Specifically, to find the projection of q onto Convex(P ), start with any point t0 ∈ Convex(P ). In the ith round, find the point pi ∈ P most extreme in the direction of q from ti−1 (i.e., maximizing 〈q − ti−1, pi〉) and set ti to be the closest point to q on the segment ti−1pi (thus simplifying Frank Wolfe, as we ignore step size issues). The standard analysis of this algorithm (e.g., [Blum et al., 2016]) gives the following. Theorem 3.3 (Convex Carathéodory). For a point set P ⊆ Rm, ε > 0, and q ∈ Rm, one can compute, in O ( |P |m/ε2 ) time, a point t ∈ Convex(P ), such that d(q, t) ≤ d(q,Convex(P )) + ε∆, where ∆ = ∆P . Furthermore, t is a convex combination of O(1/ε2) points of P . Again by exploiting properties of the gnomonic projection we are able to prove a conic analog of the above theorem. Note for P ⊂ Rm, P is contained in the linear span of at most m points from P , and similarly the exact Carathéodory theorem states any point q ∈ Convex(P ) is expressible as a convex combination of at most m+ 1 points from P . 
As the conic hull lies between the linear case (with all combinations) and the convex case (with non-negative combinations summing to one), it is not surprising an exact conic Carathéodory theorem holds. However, the linear analog of the approximate convex Caratheodory theorem does not hold, and so the following conic result is not a priori obvious. Theorem 3.4. Let P ⊂ Rm be a point set, let q be such that φP (q) < π/2− γ for some constant γ > 0, and let ε > 0 be a parameter. Then one can find, in O(|P |m/ε2) time, a point t ∈ Conic(P ) such that angle(q, t) ≤ angle(q,Conic(P ))+εφP (q). Moreover, t is a conic combination ofO(1/ε2) points from P . Due to space constraints, the detailed proof of Theorem 3.4 appears in the full version. In the proof, the dependence on γ is made clear but we make a remark about it here. If ε is kept fixed, γ shows up in the running time roughly by a factor of tan2(π/2− γ). Alternatively, if the running time is fixed, the approximation error will roughly depend on the factor 1/ tan(π/2− γ). We now give a simple example of a high dimensional point set which shows our bounded angle assumption is required for the conic Carathéodory theorem to hold. Let P consist of the standard basis vectors in Rm, let q be the all ones vector, and let ε be a parameter. Let X be a subset of P of size k, and consider aproj(q) = aproj(q,X). As P consists of basis vectors, each of which have all but one entry set to zero, aproj(q) will have at most k non-zero entries. By the symmetry of q it is also clear that all non-zero entries in aproj(q) should have the same value. Without loss of generality assume that this value is 1, and hence the magnitude of aproj(q) is √ k. Thus for aproj(q) to be an ε-approximation to q, angle(aproj(q), q) = cos−1( k√ k √ m ) = cos−1( √ k/m) < ε. Hence for a fixed ε, the number of points required to ε-approximate q depends on m, while the conic Carathéodory theorem should be independent of m. 3.2 Approximating the Conic Hull We now prove that Algorithm 1 yields an approximation to the conic hull of a given point set and hence an approximation to the nonnegative matrix factorization problem. As discussed above, previously Blum et al. [2016] provided the following (α, β)-approximation for Problem 3.2. Theorem 3.5 ([Blum et al., 2016]). For a set P of n points in Rm, and ε > 0, the greedy strategy, which iteratively adds the point furthest from the current convex hull, gives a ((8ε1/3 + ε)∆, O(1/ε2/3))-approximation to Problem 3.2, and has running time O(nc(m + c/ε2 + c2)) time, where c = O(kopt/ε2/3). Our second result, is a conic analog of the above theorem. Theorem 3.6. Given a set P of n points in Rm such that φP ≤ π2 − γ for a constant γ > 0, and a value ε > 0, Algorithm 1 gives an ((8ε1/3 + ε)φP , O(1/ε2/3))-approximation to Problem 3.1, and has running time O(nc(m+ c/ε2 + c2)), where c = O(kopt/ε2/3). Bounding the approximation error requires carefully handling the distortion due to the gnomonic project, and the details are presented in the full version. Additionally, Blum et al. [2016] provide other (α, β)-approximations, for different values of α and β, and in the full version these other results are also shown to hold for the conic case. 4 Hardness of the Convex and Conic Problems This section gives a reduction from d-SUM to the convex approximation of Problem 3.2, implying it is d-SUM-hard. In the full version a similar setup is used to argue the conic approximation of Problem 3.1 is d-SUM-hard. 
Actually if Problem 3.1 allowed instances where φP = π/2 the reduction would be virtually the same. However, arguing that the problem remains hard under our requirement that φP ≤ π/2− γ, is non-trivial and some of the calculations become challenging and lengthy. The reductions to both problems are partly inspired by Arora et al. [2016]. However, here, we use the somewhat non-standard version of d-SUM where repetitions are allowed as described below. Problem 4.1 (d-SUM). In the d-SUM problem we are given a set S = {s1, s2, · · · , sN} of N values, each in the interval [0, 1], and the goal is to determine if there is a set of d numbers (not necessarily distinct) whose sum is exactly d/2. It was shown by Patrascu and Williams [2010] that if d-SUM can be solved in No(d) time then 3-SAT has a sub-exponential time algorithm, i.e., that the Exponential Time Hypothesis is false. Theorem 4.2 (d-SUM-hard). Let d < N0.99, δ < 1. If d-SUM on N numbers of O(d log(N)) bits can be solved in O(Nδd) time, then 3-SAT on n variables can be solved in 2o(n) time. We will prove the following decision version of Problem 3.2 is d-SUM-hard. Note in this section the dimension will be denoted by d rather than m, as this is standard for d-SUM reductions. Problem 4.3. Given a set P of n points in Rd, a value ε > 0, and an integer k, is there a subset X ⊆ P of k points such that dconvex(X,P ) ≤ ε∆, where ∆ is the diameter of P . Given an instance of d-SUM with N values S = {s1, s2, · · · , sN} we construct an instance of Problem 4.3 where P ⊂ Rd+2, k = d, and ε = 1/3 (or any sufficiently small value). The idea is to create d clusters each containing N points corresponding to a choice of one of the si values. The clusters are positioned such that exactly one point from each cluster must be chosen. The d + 2 coordinates are labeled ai for i ∈ [d], w, and v. Together, a1, · · · , ad determine the cluster. The w dimension is used to compute the sum of the chosen si values. The v dimension is used as a threshold to determine whether d-SUM is a yes or no instance to Problem 4.3. Let w(pj) denote the w value of an arbitrary point pj . We assume d ≥ 2 as d-SUM is trivial for d = 1. Let e1, e2, · · · , ed ∈ Rd be the standard basis in Rd, e1 = (1, · · · , 0), e2 = (0, 1, · · · , 0), . . . , and ed = (0, · · · , 1). Together they form the unit d-simplex, and they define the d clusters in the construction. Finally, let ∆∗ = √ 2 + (εsmax − εsmin)2 be a constant where smax and smin are, respectively, the maximum and minimum values in S. Definition 4.4. The set of points P ⊂ Rd+2 are the following pij points: For each i ∈ [d], j ∈ [N ], set (a1, · · · , ad) = ei, w = εsj and v = 0 q point: For each i ∈ [d], ai = 1/d, w = ε/2, v = 0 q′ point: For each i ∈ [d], ai = 1/d and w = ε/2, v = ε∆∗ Lemma 4.5 (Proof in full version). The diameter of P , ∆P , is equal to ∆∗. We prove completeness and soundness of the reduction. Below P i = ∪j pij denotes the ith cluster. Observation 4.6. If maxp∈P d(p,Convex(X)) ≤ ε∆, then dconvex(X,P ) ≤ ε∆: For point sets A and B = {b1, . . . , bm}, if we fix a ∈ Convex(A), then for any b ∈ Convex(B) we have ||a− b|| = ||a−∑i αibi|| = ||∑i αi(a− bi)|| ≤∑i αi||a− bi|| ≤ maxi ||a− bi||. Lemma 4.7 (Completeness). If there is a subset {sk1 , sk2 , · · · , skd} of d values (not necessarily distinct) such that ∑ i∈[d] ski = d/2, then the above described instance of Problem 4.3 is a true instance, i.e. there is a d sized subset X ⊆ P with dconvex(X,P ) ≤ ε∆. 
Proof: For each value ski consider the point xi = (ei, ε · ski , 0), which by Definition 4.4 is a point in P . Let X = {x1, . . . , xd}. We now prove maxp∈P d(p,Convex(X)) ≤ ε∆, which by Observation 4.6 implies that dconvex(X,P ) ≤ ε∆. First observe that for any pij in P , d(p i j , xi) = √ (w(pij)− w(xi))2 ≤ |εsj − εski | ≤ ε∆. The only other points in P are q and q′. Note that d(q, q′) = ε∆∗ = ε∆ from Lemma 4.5. Thus if we can prove that q ∈ Convex(X) then we will have shown maxp∈P d(p,Convex(X)) ≤ ε∆. Specifically, we prove that the convex combination x = 1d ∑d i xi is the point q. As X contains exactly one point from each set P i, and in each such set all points have ai = 1 and all other aj = 0, it holds that x has 1/d for all the a coordinates. All points in X have v = 0 and so this holds for x as well. Thus we only need to verify that w(x) = w(q) = ε/2, for which we have w(x) = 1d ∑ i w(xi) = 1 d ∑ i εski = 1 d (εd/2) = ε/2. Proving soundness requires some helper lemmas. Note that in the above proof we constructed a solution to Problem 4.3 that selected exactly one point from each cluster P i. We now prove that this is a required property. Lemma 4.8 (Proof in full version). Let P ⊂ Rd+2 be as defined above, and let X ⊆ P be a subset of size d. If dconvex(X,P ) ≤ ε∆, then for all i, X contains exactly one point from P i. Lemma 4.9 (Proof in full version). If dconvex(X,P ) ≤ ε∆, then q ∈ Convex(X) and moreover q = 1d ∑ xi∈X xi. Lemma 4.10 (Soundness). Let P be an instance of Problem 4.3 generated from a d-SUM instance S, as described in Definition 4.4. If there is a subset X ⊆ P of size d such that dconvex(X,P ) ≤ ε∆, then there is a choice of d values from S that sum to exactly d/2. Proof: From Lemma 4.8 we know that X consist of exactly one point from each cluster P i. Thus for each xi ∈ X , w(xi) = εski for some ski ∈ S. By Lemma 4.9, q = 1d ∑ i xi, which implies w(q) = 1d ∑ i w(xi). By Definition 4.4w(q) = ε/2, which implies ε/2 = 1 d ∑ i w(xi) = 1 d ∑ i εski . Thus we have a set {sk1 , . . . , skd} of d values from S such that ∑ i ski = d/2. Lemma 4.7 and Lemma 4.10 immediately imply the following. Theorem 4.11. For point sets in Rd+2, Problem 4.3 is d-SUM-hard. 5 Experimental Results We report an experimental comparison of the proposed greedy algorithm for conic hulls, the greedy algorithm for convex hulls (the conic hull algorithm without the projection step) [Blum et al., 2016], the X-RAY (max) algorithm [Kumar et al., 2013], a modified version of X-RAY, dubbed mutant X-RAY, which simply selects the point furthest away from the current cone (i.e., with the largest residual), and a γ-shifted version of the conic hull algorithm described below. Other methods such as Hottopixx [Recht et al., 2012, Gillis and Luce, 2014] and SPA [Gillis and Vavasis, 2014] were not included due to their similar performance to the above methods. For our experiments, we considered the performance of each of the methods when used to select features for a variety of SVM classification tasks on various image, text, and speech data sets including several from the Arizona State University feature selection repository [Li et al., 2016] as well as the UCI Reuters dataset and the BBC News dataset [Greene and Cunningham, 2006]. The Reuters and BBC text datasets are represented using the TF-IDF representation. For the Reuters dataset, only the ten most frequent topics were used for classification. 
In all datasets, columns (corresponding to features) that were identically equal to zero were removed from the data matrix. For each problem, the data is divided using a 30/70 train/test split, the features are selected by the indicated method, and then an SVM classifier is trained using only the selected features. For the conic and convex hull methods, ε is set to 0.1. The accuracy (percent of correctly classified instances) is plotted versus the number of selected features for each method in Figure 4.1. Additional experimental results can be found in the full version. Generally speaking, the convex, mutant X-RAY, and shifted conic algorithms seem to consistently perform the best on the tasks. The difference in performance between convex and conic is most striking on the two text data sets, Reuters and BBC. In the case of BBC and Reuters, this is likely due to the fact that many of the columns of the TF-IDF matrix are orthogonal. We note that the quality of both X-RAY and conic is improved if thresholding is used when constructing the feature matrix, but they still seem to underperform the convex method on text datasets. The text datasets are also interesting in that not only do they violate the explicit assumption in our theorems that the angular diameter of the conic hull be strictly less than π/2, but there are also many mutually orthogonal columns of the document-feature matrix. This observation motivates the γ-shifted version of the conic hull algorithm, which simply takes the input matrix X and adds γ to all of the entries (essentially translating the data along the all-ones vector) and then applies the conic hull algorithm. Let 1_{a,b} denote the a × b matrix of ones. After a nonnegative shift, the angular assumption is satisfied, and the restricted NMF problem is that of approximating (X + γ1_{m,n}) as (B + γ1_{m,k})C, where the columns of B are again chosen from those of X. Under the Frobenius norm, ‖(X + γ1_{m,n}) − (B + γ1_{m,k})C‖²_F = Σ_{i,j} (X_{ij} − B_{i,:}C_{:,j} + γ(1 − ‖C_{:,j}‖_1))². As C must be a nonnegative matrix, the shifted conic case acts like the original conic case plus a penalty that encourages the columns of C to sum to one (i.e., it is a hybrid between the conic case and the convex case). The plots illustrate the performance of the γ-shifted conic hull algorithm for γ = 10. After the shift, the performance more closely matches that of the convex and mutant X-RAY methods on TF-IDF features. Given these experimental results and the simplicity of the proposed convex and conic methods, we suggest that both methods should be added to practitioners' toolboxes. In particular, the superior performance of the convex algorithm on text datasets, compared to X-RAY and the conic algorithm, seems to suggest that these types of "convex" factorizations may be more desirable for TF-IDF features.

Acknowledgments

Greg Van Buskirk and Ben Raichel were partially supported by NSF CRII Award-1566137. Nicholas Ruozzi was partially supported by the DARPA Explainable Artificial Intelligence Program under contract number N66001-17-2-4032 and NSF grant III-1527312.
1. What is the focus of the paper in terms of computational complexity?
2. What is the novelty of the proposed algorithm compared to prior works?
3. What are the strengths of the paper regarding its theoretical analysis?
4. Are there any concerns regarding the algorithm's dependence on dimensionality?
5. How relevant are the results to the target audience?
Review
Review This paper provides an approximation algorithm for NMF. Specifically, the algorithm outputs a few columns of the data matrix such that the conic hull of those columns is close to the conic hull of the columns of the entire matrix (in an appropriately defined metric). The main difference from existing works is that it does not explicitly assume there is a true model (i.e., few columns such that other columns are generated as combinations of those columns) or the separability assumption made in the existing works. In this sense, the results are model-free. The algorithm is based on gnomonic projections and is heavily based on [16], with appropriate modifications. It is not clear why the algorithm does not depend explicitly on the dimensionality of the matrices. It would be better if the authors explained this clearly or pointed out which assumption leads to this effect. The results provided are interesting and would be of interest to the community. Hence I propose to accept the paper.
NIPS
Title Sparse Approximate Conic Hulls

Abstract We consider the problem of computing a restricted nonnegative matrix factorization (NMF) of an m × n matrix X. Specifically, we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and C ∈ R^{k×n}_{≥0}. Equivalently, given the matrix X, consider the problem of finding a small subset, S, of the columns of X such that the conic hull of S ε-approximates the conic hull of the columns of X, i.e., the distance of every column of X to the conic hull of the columns of S should be at most an ε-fraction of the angular diameter of X. If k is the size of the smallest ε-approximation, then we produce an O(k/ε) sized O(ε)-approximation, yielding the first provable, polynomial time ε-approximation for this class of NMF problems, where also desirably the approximation is independent of n and m. Furthermore, we prove an approximate conic Carathéodory theorem, a general sparsity result, that shows that any column of X can be ε-approximated with an O(1/ε) sparse combination from S. Our results are facilitated by a reduction to the problem of approximating convex hulls, and we prove that both the convex and conic hull variants are d-SUM-hard, resolving an open problem. Finally, we provide experimental results for the convex and conic algorithms on a variety of feature selection tasks.

1 Introduction

Matrix factorizations of all sorts (SVD, NMF, CU, etc.) are ubiquitous in machine learning and computer science. In general, given an m × n matrix X, the goal is to find a decomposition into a product of two matrices B ∈ R^{m×k} and C ∈ R^{k×n} such that the Frobenius norm between X and BC is minimized. If no further restrictions are placed on the matrices B and C, this problem can be solved optimally by computing the singular value decomposition. However, imposing restrictions on B and C can lead to factorizations which are more desirable for reasons such as interpretability and sparsity. One of the most common restrictions is non-negative matrix factorization (NMF), requiring B and C to consist only of non-negative entries (see [Berry et al., 2007] for a survey). Practically, NMF has seen widespread usage as it often produces nice factorizations that are frequently sparse. Typically NMF is accomplished by applying local search heuristics, and while NMF can be solved exactly in certain cases (see [Arora et al., 2016]), in general NMF is not only NP-hard [Vavasis, 2009] but also d-SUM-hard [Arora et al., 2016]. One drawback of factorizations such as SVD or NMF is that they can represent the data using a basis that may have no clear relation to the data. CU decompositions [Mahoney and Drineas, 2009] address this by requiring the basis to consist of input points. While it appears that the hardness of this problem has not been resolved, approximate solutions are known. Most notable is the additive approximation of Frieze et al. [2004], though more recently there have been advances on the multiplicative front [Drineas et al., 2008, Çivril and Magdon-Ismail, 2012, Guruswami and Sinop, 2012]. Similar restrictions have also been considered for NMF. Donoho and Stodden [2003] introduced a separability assumption for NMF, and Arora et al. [2016] showed that an NMF can be computed in polynomial time under this assumption.
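As a point of reference for the unrestricted problem mentioned above, a minimal numpy sketch of the SVD-based optimum (Eckart–Young); the helper name is hypothetical:

import numpy as np

def best_rank_k(X, k):
    # Unrestricted optimum: truncated SVD minimizes ||X - B @ C||_F over rank-k B, C.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]   # returns B, C with X ~= B @ C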
Various other methods have since been proposed for NMF under the separability (or near separability) assumption [Recht et al., 2012, Kumar et al., 2013, Benson et al., 2014, Gillis and Vavasis, 2014, Zhou et al., 2014, Kumar and Sindhwani, 2015]. The separability assumption requires that there exists a subset S of the columns of X such that X = X_S C for some nonnegative matrix C. This assumption can be restrictive in practice, e.g., when an exact subset does not exist but a close approximate subset does, i.e., X ≈ X_S C. To our knowledge, no exact or approximate polynomial time algorithms have been proposed for the general problem of computing an NMF under only the restriction that the columns must be selected from those of X. In this work, we fill this gap by arguing that a simple greedy algorithm can be used to provide a polynomial time ε-approximation algorithm for NMF under the column subset restriction. Note that the separability assumption is not required here: our theoretical analysis bounds the error of our selected columns versus the best possible columns that could have been chosen. The algorithm is based on recent work on fast algorithms for approximately computing the convex hull of a set of points [Blum et al., 2016]. As in previous approaches [Donoho and Stodden, 2003, Kumar et al., 2013], we formulate restricted NMF geometrically as finding a subset, S, of the columns of the matrix X whose conic hull, the set of all nonnegative combinations of columns of S, well-approximates the conic hull of X. Using gnomonic projection, we reduce the conic hull problem to a convex hull problem and then apply the greedy strategy of Blum et al. [2016] to compute the convex hull of the projected points. Given a set of points P in R^m, the convex hull of S ⊆ P, denoted Convex(S), is said to ε-approximate Convex(P) if the Hausdorff distance between Convex(S) and Convex(P) is at most ε · diameter(P). For a fixed ε > 0, suppose the minimum sized subset of P whose convex hull ε-approximates the convex hull of P has size k; then Blum et al. [2016] show that a simple greedy algorithm gives an ε′ = O(ε^{1/3}) approximation using at most k′ = O(k/ε^{2/3}) points of P, with an efficient O(nc(m + c/ε² + c²)) running time, where c = O(k_opt/ε^{2/3}). By careful analysis, we show that our reduction achieves the same guarantees for the conic problem. (Note Blum et al. [2016] present other trade-offs between k′ and ε′, which we argue carry over to the conic case as well.) Significantly, k′ and ε′ are independent of n and m, making this algorithm desirable for large high dimensional point sets. Note that our bounds on the approximation quality and the number of points do not explicitly depend on the dimension, as they are relative to the size of the optimal solution, which itself may or may not depend on dimension. Like the X-RAY algorithm [Kumar et al., 2013], our algorithm is easy to parallelize, allowing it to be applied to large-scale problems. In addition to the above ε-approximation algorithm, we also present two additional theoretical results of independent interest. The first theoretical contribution provides justification for empirical observations about the sparsity of NMF [Lee and Seung, 1999, Ding et al., 2010]. Due to the high dimensional nature of many data sets, there is significant interest in sparse representations requiring far fewer points than the dimension.
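Once a column subset S has been selected (by this or any other method), the remaining factor C of the restricted NMF X ≈ X_S C can be recovered by column-wise nonnegative least squares. A minimal sketch, assuming scipy; the function name is illustrative:

import numpy as np
from scipy.optimize import nnls

def restricted_nmf_coeffs(X, S):
    # Given selected column indices S, fit C >= 0 minimizing ||X - X[:, S] C||_F,
    # one column of C at a time via nonnegative least squares.
    B = X[:, S]
    C = np.column_stack([nnls(B, X[:, j])[0] for j in range(X.shape[1])])
    return B, C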
Our theoretical justification for sparsity is based on Carathéodory's theorem: any point q in the convex hull of P can be expressed as a convex combination of at most m + 1 points from P. This is tight in the worst case for exact representation; however, the approximate Carathéodory theorem [Clarkson, 2010, Barman, 2015] states there is a point q′ which is a convex combination of O(1/ε²) points of P (i.e., independent of n and m) such that ‖q − q′‖ ≤ ε · diameter(P). This result has a long history with significant implications in machine learning, e.g., relating to the analysis of the perceptron algorithm [Novikoff, 1962], though the clean geometric statement of this theorem appears to be not well known outside the geometry community. Moreover, this approximation is easily computable with a greedy algorithm (e.g., [Blum et al., 2016]) similar to the Frank-Wolfe algorithm. The analogous statement for the linear case does not hold, so it is not immediately obvious whether such an approximate Carathéodory theorem should hold for the conic case, a question which we answer in the affirmative. As a second theoretical contribution, we address the question of whether or not the convex/conic hull problems are actually hard, i.e., whether approximations are actually necessary. We answer this question for both problems in the affirmative, resolving an open question of Blum et al. [2016], by showing that both the conic and convex problems are d-SUM-hard. Finally, we evaluate the performance of the greedy algorithms for computing the convex and conic hulls on a variety of feature selection tasks against existing methods. We observe that both the conic and convex algorithms perform well for a variety of feature selection tasks, though, somewhat surprisingly, the convex hull algorithm, for which previously no experimental results had been produced, yields consistently superior results on text datasets. We use our theoretical results to provide intuition for these empirical observations.

2 Preliminaries

Let P be a point set in R^m. For any p ∈ P, we interchangeably use the terms vector and point, depending on whether or not we wish to emphasize the direction from the origin. Let ray(p) denote the unbounded ray passing through p, whose base lies at the origin. Let unit(p) denote the unit vector in the direction of p; equivalently, unit(p) is the intersection of ray(p) with the unit hypersphere S^{m−1}. For any subset X = {x_1, ..., x_k} ⊆ P, ray(X) = {ray(x_1), ..., ray(x_k)} and unit(X) = {unit(x_1), ..., unit(x_k)}. Given points p, q ∈ P, let d(p, q) = ‖p − q‖ denote their Euclidean distance, and let ⟨p, q⟩ denote their dot product. Let angle(ray(p), ray(q)) = angle(p, q) = cos^{−1}(⟨unit(p), unit(q)⟩) denote the angle between the rays ray(p) and ray(q), or equivalently between the vectors p and q. For two sets, P, Q ⊆ R^m, we write d(P, Q) = min_{p∈P, q∈Q} d(p, q), and for a single point q we write d(q, P) = d({q}, P); the same definitions apply to angle(). For any subset X = {x_1, ..., x_k} ⊆ P, let Convex(X) = {Σ_i α_i x_i | α_i ≥ 0, Σ_i α_i = 1} denote the convex hull of X. Similarly, let Conic(X) = {Σ_i α_i x_i | α_i ≥ 0} denote the conic hull of X and DualCone(X) = {z ∈ R^m | ⟨x, z⟩ ≥ 0 ∀x ∈ X} the dual cone. For any point q ∈ R^m, the projection of q onto Convex(X) is the closest point to q in Convex(X), proj(q) = proj(q, Convex(X)) = arg min_{x∈Convex(X)} d(q, x).
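The two basic angular notions just defined translate directly into code; a small numpy sketch (illustrative only):

import numpy as np

def unit(p):
    # unit(p): intersection of ray(p) with the unit hypersphere.
    return p / np.linalg.norm(p)

def angle(p, q):
    # angle(p, q) = arccos(<unit(p), unit(q)>); clipping guards against rounding.
    return np.arccos(np.clip(np.dot(unit(p), unit(q)), -1.0, 1.0))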
Similarly, the angular projection of q onto Conic(X) is the angularly closest point to q in Conic(X), aproj(q) = aproj(q, Conic(X)) = arg min_{x∈Conic(X)} angle(q, x). Note that angular projection defines an entire ray of Conic(X), rather than a single point, so without loss of generality we choose the point on the ray minimizing the Euclidean distance to q. In fact, abusing notation, we sometimes equivalently view Conic(X) as a set of rays rather than points, in which case aproj(ray(q)) = aproj(q) is the entire ray. For X ⊂ R^m, let ∆ = ∆_X = max_{p,q∈X} d(p, q) denote the diameter of X. The angular diameter of X is φ = φ_X = max_{p,q∈X} angle(p, q). Similarly, φ_X(q) = max_{p∈X} angle(p, q) denotes the angular radius of the minimum radius cone centered around the ray through q and containing all of P.

Definition 2.1. Consider a subset X of a point set P ⊂ R^m. X is an ε-approximation to Convex(P) if d_convex(X, P) = max_{p∈Convex(P)} d(p, Convex(X)) ≤ ε∆. Note d_convex(X, P) is the Hausdorff distance between Convex(X) and Convex(P). Similarly, X is an ε-approximation to Conic(P) if d_conic(X, P) = max_{p∈Conic(P)} angle(p, Conic(X)) ≤ εφ_P.

Note that the definition of ε-approximation for Conic(P) uses angular rather than Euclidean distance in order to be defined for rays, i.e., scaling a point outside the conic hull changes its Euclidean distance but its angular distance is unchanged since its ray stays the same. Thus we find that considering angles better captures what it means to approximate the conic hull than the distance based Frobenius norm which is often used to evaluate the quality of approximation for NMF. As we are concerned only with angles, without loss of generality we often will assume that all points in the input set P have been scaled to have unit length, i.e., P = unit(P). In our theoretical results, we will always assume that φ_P < π/2. Note that if P lies in the non-negative orthant, then for any strictly positive q, φ_P(q) < π/2. In the case that P is not strictly inside the positive orthant, the points can be uniformly translated a small amount to ensure that φ_P < π/2.

3 A Simple Greedy Algorithm

Let P be a finite point set in R^m (with unit lengths). Call a point p ∈ P extreme if it lies on the boundary of the conic hull (resp. convex hull). Observe that for any X ⊆ P containing all the extreme points, it holds that Conic(X) = Conic(P) (resp. Convex(X) = Convex(P)). Consider the simple greedy algorithm which builds a subset of points S by iteratively adding to S the point angularly furthest from the conic hull of the current point set S (for the convex hull, take the furthest point in distance); a small sketch of this loop follows below. One can argue that in each round this algorithm selects an extreme point, and thus it can be used to find a subset of points whose hull captures that of P. Note that if the hull is not degenerate, i.e., no point on the boundary is expressible as a combination of other points on the boundary, then this produces the minimum sized subset capturing P. Otherwise, one can solve a recursive subproblem as discussed by Kumar et al. [2013] to exactly recover S. Here instead we consider finding a small subset of points (potentially much smaller than the number of extreme points) to approximate the hull. The question is then whether this greedy approach still yields a reasonable solution, which is not clear as there are simple examples showing the best approximate subset includes non-extreme points.
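A minimal sketch of that greedy loop (convex variant, points stored as rows), with the hull-distance subroutine left abstract; one way to implement it, via the greedy projection routine of Sec. 3.1, is sketched further below. All names are illustrative:

import numpy as np

def greedy_hull_subset(P, k, dist_to_hull):
    # P: points as rows; dist_to_hull(q, S) ~ d(q, Convex(S)).
    # Repeatedly add the point furthest from the hull of the current subset.
    S = [P[0]]
    for _ in range(k - 1):
        dists = [dist_to_hull(q, np.array(S)) for q in P]
        S.append(P[int(np.argmax(dists))])
    return np.array(S)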
Moreover, arguing about the conic approximation directly is challenging as it involves angles and hence spherical (rather than planar) geometry. For the convex case, Blum et al. [2016] argued that this greedy strategy does yield a good approximation. Thus we seek a way to reduce our conic problem to an instance of the convex problem, without introducing too much error in the process, which brings us to the gnomonic projection. Let hplane(q) be the hyperplane defined by the equation ⟨(q − x), q⟩ = 0, where q ∈ R^m is a unit length normal vector. The gnomonic projection of P onto hplane(q) is defined as gp_q(P) = {ray(P) ∩ hplane(q)} (see Figure 3.1). Note that gp_q(q) = q. For any point x in hplane(q), the inverse gnomonic projection is pg_q(x) = ray(x) ∩ S^{m−1}. Similar to other work [Kumar et al., 2013], we allow projections onto any hyperplane tangent to the unit hypersphere with normal q in the strictly positive orthant. A key property of the gnomonic projection is that the problem of finding the extreme points of the convex hull of the projected points is equivalent to finding the extreme points of the conic hull of P. (Additional properties of the gnomonic projection are discussed in the full version.) Thus the strategy to approximate the conic hull should now be clear. Let P′ = gp_q(P). We apply the greedy strategy of Blum et al. [2016] to P′ to build a set of extreme points S, by iteratively adding to S the point furthest from the convex hull of the current point set S. This procedure is shown in Algorithm 1. We show that Algorithm 1 can be used to produce an ε-approximation to the restricted NMF problem. Formally, for ε > 0, let opt(P, ε) denote any minimum cardinality subset X ⊆ P which ε-approximates Conic(P), and let k_opt = |opt(P, ε)|. We consider the following problem.

Problem 3.1. Given a set P of n points in R^m such that φ_P ≤ π/2 − γ, for a constant γ > 0, and a value ε > 0, compute opt(P, ε).

Alternatively, one can fix k rather than ε, defining opt(P, k) = arg min_{X⊆P, |X|=k} d_conic(X, P) and ε_opt = d_conic(opt(P, k), P). Our approach works for either variant, though here we focus on the version in Problem 3.1. Note the bounded angle assumption applies to any collection of points in the strictly positive orthant (a small translation can be used to ensure this for any nonnegative data set). In this section we argue Algorithm 1 produces an (α, β)-approximation to an instance (P, ε) of Problem 3.1, that is, a subset X ⊆ P such that d_conic(X, P) ≤ α and |X| ≤ β · k_opt = β · |opt(P, ε)|. For ε > 0, similarly define opt_convex(P, ε) to be any minimum cardinality subset X ⊆ P which ε-approximates Convex(P). Blum et al. [2016] gave an (α, β)-approximation for the following.

Problem 3.2. Given a set P of n points in R^m, and a value ε > 0, compute opt_convex(P, ε).

Note the proofs of correctness and approximation quality from Blum et al. [2016] for Problem 3.2 do not immediately imply the same results for using Algorithm 1 for Problem 3.1. To see this, consider any points u, v on S^{m−1}. Note the angle between u and v is the same as their geodesic distance on S^{m−1}. Intuitively, we want to claim the geodesic distance between u and v is roughly the same as the Euclidean distance between gp_q(u) and gp_q(v). While this is true for points near q, as we move away from q the correspondence breaks down (and is unbounded as you approach π/2). This non-uniform distortion requires care, and thus the proofs had to be moved to the full version.
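Since hplane(q) = {x : ⟨x, q⟩ = 1} for unit-norm q, the gnomonic projection amounts to rescaling each point so that its inner product with q equals one. A minimal numpy sketch (assuming ⟨p, q⟩ > 0 for every p ∈ P, i.e., φ_P(q) < π/2):

import numpy as np

def gnomonic(P, q):
    # gp_q(p) = ray(p) intersect hplane(q): scale p by 1 / <p, q>.
    return np.array([p / np.dot(p, q) for p in P])

def inverse_gnomonic(x):
    # pg_q(x) = ray(x) intersect the unit sphere.
    return x / np.linalg.norm(x)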
Finally, observe that Algorithm 1 requires being able to compute the point furthest from the convex hull. To do so we use the (convex) approximate Carathéodory theorem, which is both theoretically and practically very efficient, and produces provably sparse solutions. As a stand-alone result, we first prove the conic analog of the approximate Carathéodory theorem. This result is of independent interest since it can be used to sparsify the returned solution from Algorithm 1, or any other algorithm.

3.1 Sparsity and the Approximate Conic Carathéodory Theorem

Our first result is a conic approximate Carathéodory theorem. That is, given a point set P ⊆ R^m and a query point q, the angularly closest point to q in Conic(P) can be approximately expressed as a sparse combination of points from P. More precisely, one can compute a point t which is a conic combination of O(1/ε²) points from P such that angle(q, t) ≤ angle(q, Conic(P)) + εφ_P. The significance of this result is as follows. Recall that we seek a factorization X ≈ BC, where the k columns of B are a subset of those from X and the entries of C are non-negative. Ideally each point in X is expressed as a sparse combination from the basis B, that is, each column of C has very few non-zero entries. So suppose we are given any factorization BC, but C is dense. Then no problem: just throw out C, and use our Carathéodory theorem to compute a new matrix C′ with sparse columns. Namely, treat each column of X as the query q and run the theorem for the point set P = B; the non-zero entries of the corresponding column of C′ are then just the selected combination from B. Not only does this mean we can sparsify any solution to our NMF problem (including those obtained by other methods), but it also means conceptually that rather than finding a good pair BC, one only needs to focus on finding the subset B, as is done in Algorithm 1. Note that Algorithm 1 allows non-negative inputs in P because φ_P < π/2 ensures P can be rotated into the positive orthant. While it appears the conic approximate Carathéodory theorem had not previously been stated, the convex version has a long history (e.g., implied by [Novikoff, 1962]). The algorithm to compute this sparse convex approximation is again a simple and fast greedy algorithm, which roughly speaking is a simplification of the Frank-Wolfe algorithm for this particular problem. Specifically, to find the projection of q onto Convex(P), start with any point t_0 ∈ Convex(P). In the i-th round, find the point p_i ∈ P most extreme in the direction of q from t_{i−1} (i.e., maximizing ⟨q − t_{i−1}, p_i⟩) and set t_i to be the closest point to q on the segment t_{i−1}p_i (thus simplifying Frank-Wolfe, as we ignore step size issues). The standard analysis of this algorithm (e.g., [Blum et al., 2016]) gives the following.

Theorem 3.3 (Convex Carathéodory). For a point set P ⊆ R^m, ε > 0, and q ∈ R^m, one can compute, in O(|P|m/ε²) time, a point t ∈ Convex(P), such that d(q, t) ≤ d(q, Convex(P)) + ε∆, where ∆ = ∆_P. Furthermore, t is a convex combination of O(1/ε²) points of P.

Again, by exploiting properties of the gnomonic projection we are able to prove a conic analog of the above theorem. Note for P ⊂ R^m, P is contained in the linear span of at most m points from P, and similarly the exact Carathéodory theorem states any point q ∈ Convex(P) is expressible as a convex combination of at most m + 1 points from P.
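A minimal numpy sketch of that greedy projection routine (points as rows; the round count plays the role of the O(1/ε²) bound); it also provides the dist_to_hull subroutine assumed in the earlier greedy-selection sketch. Names are illustrative:

import numpy as np

def approx_caratheodory(P, q, rounds):
    # Greedy (Frank-Wolfe-like) projection of q onto Convex(P).
    t = P[0].copy()                      # t_0: any point of Convex(P)
    for _ in range(rounds):
        i = int(np.argmax(P @ (q - t)))  # point most extreme toward q
        d = P[i] - t
        if np.dot(d, d) == 0.0:
            break
        gamma = np.clip(np.dot(q - t, d) / np.dot(d, d), 0.0, 1.0)
        t = t + gamma * d                # closest point to q on segment [t, P[i]]
    return t

def dist_to_hull(q, S, rounds=100):
    # Approximate d(q, Convex(S)) via the routine above.
    return np.linalg.norm(q - approx_caratheodory(S, q, rounds))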
As the conic hull lies between the linear case (with all combinations) and the convex case (with non-negative combinations summing to one), it is not surprising that an exact conic Carathéodory theorem holds. However, the linear analog of the approximate convex Carathéodory theorem does not hold, and so the following conic result is not a priori obvious.

Theorem 3.4. Let P ⊂ R^m be a point set, let q be such that φ_P(q) < π/2 − γ for some constant γ > 0, and let ε > 0 be a parameter. Then one can find, in O(|P|m/ε²) time, a point t ∈ Conic(P) such that angle(q, t) ≤ angle(q, Conic(P)) + εφ_P(q). Moreover, t is a conic combination of O(1/ε²) points from P.

Due to space constraints, the detailed proof of Theorem 3.4 appears in the full version. In the proof, the dependence on γ is made clear, but we make a remark about it here. If ε is kept fixed, γ shows up in the running time roughly by a factor of tan²(π/2 − γ). Alternatively, if the running time is fixed, the approximation error will roughly depend on the factor 1/tan(π/2 − γ). We now give a simple example of a high dimensional point set which shows our bounded angle assumption is required for the conic Carathéodory theorem to hold. Let P consist of the standard basis vectors in R^m, let q be the all-ones vector, and let ε be a parameter. Let X be a subset of P of size k, and consider aproj(q) = aproj(q, X). As P consists of basis vectors, each of which has all but one entry set to zero, aproj(q) will have at most k non-zero entries. By the symmetry of q it is also clear that all non-zero entries in aproj(q) should have the same value. Without loss of generality assume that this value is 1, and hence the magnitude of aproj(q) is √k. Thus for aproj(q) to be an ε-approximation to q, angle(aproj(q), q) = cos^{−1}(k/(√k·√m)) = cos^{−1}(√(k/m)) < ε. Hence for a fixed ε, the number of points required to ε-approximate q depends on m, while the conic Carathéodory theorem should be independent of m.

3.2 Approximating the Conic Hull

We now prove that Algorithm 1 yields an approximation to the conic hull of a given point set and hence an approximation to the nonnegative matrix factorization problem. As discussed above, Blum et al. [2016] previously provided the following (α, β)-approximation for Problem 3.2.

Theorem 3.5 ([Blum et al., 2016]). For a set P of n points in R^m, and ε > 0, the greedy strategy, which iteratively adds the point furthest from the current convex hull, gives a ((8ε^{1/3} + ε)∆, O(1/ε^{2/3}))-approximation to Problem 3.2, and has running time O(nc(m + c/ε² + c²)), where c = O(k_opt/ε^{2/3}).

Our second result is a conic analog of the above theorem.

Theorem 3.6. Given a set P of n points in R^m such that φ_P ≤ π/2 − γ for a constant γ > 0, and a value ε > 0, Algorithm 1 gives a ((8ε^{1/3} + ε)φ_P, O(1/ε^{2/3}))-approximation to Problem 3.1, and has running time O(nc(m + c/ε² + c²)), where c = O(k_opt/ε^{2/3}).

Bounding the approximation error requires carefully handling the distortion due to the gnomonic projection, and the details are presented in the full version. Additionally, Blum et al. [2016] provide other (α, β)-approximations, for different values of α and β, and in the full version these other results are also shown to hold for the conic case.

4 Hardness of the Convex and Conic Problems

This section gives a reduction from d-SUM to the convex approximation of Problem 3.2, implying it is d-SUM-hard. In the full version a similar setup is used to argue the conic approximation of Problem 3.1 is d-SUM-hard.
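A quick numeric check of the basis-vector example from Sec. 3.1 above (a hypothetical sanity script, not from the paper): with k of the m standard basis vectors, the best achievable angle to the all-ones vector q is exactly cos^{−1}(√(k/m)), so for fixed ε the required k grows with m.

import numpy as np

m, k = 100, 10
q = np.ones(m)
t = np.zeros(m)
t[:k] = 1.0   # angularly closest point in the cone of k basis vectors
cos_angle = np.dot(t, q) / (np.linalg.norm(t) * np.linalg.norm(q))
assert np.isclose(np.arccos(cos_angle), np.arccos(np.sqrt(k / m)))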
1. What is the main contribution of the paper regarding the approximation algorithm for finding a small subset of columns that approximates the conic hull of X?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works, particularly in terms of the analysis and transformation from gnomonic projection?
3. Do you have any concerns or questions regarding the paper's proofs, such as the immediacy of the proofs for the conic case from the convex case?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, particularly in terms of citing earlier works and switching between different cases?
5. Are there any specific details or explanations that the reviewer would like to know more about, such as the optimization problem solved to project points onto the current convex hull at step k?
6. How does the reviewer evaluate the relevance and correctness of the proposed algorithm, particularly in recovering the set of columns S if the data is generated using conic combinations of X?
Review
Review The paper presents a greedy approximation algorithm for finding a small subset S of columns of X such that the conic hull of S approximates the conic hull of X. If k is the smallest number of columns needed to get the ε-approximation, the algorithm produces O(k/ε^{2/3}) columns that give an O(ε^{1/3}) approximation. The algorithm is heavily inspired by an earlier work [16] that produces the same approximation for the convex hull problem. The authors transform the conic hull problem to a convex hull problem using gnomonic projection (scaling the points so that they lie on a suitable hyperplane at unit distance from the origin). The main contribution is claimed to be the analysis of this, which the authors say is not immediate from the analysis in [16]. Apart from this, the paper also proves an approximate Carathéodory theorem for conic hulls, and shows that both convex and conic hull versions are d-SUM-hard. Overall, the paper shows that the same approximation results hold for the conic case as were shown earlier for the convex case in [16]. I have not gone through all the proofs in the appendix so cannot comment on the immediacy of the proofs for the conic case from the convex case [16]. The approximate Carathéodory theorem (Thm 3.4) for the conic case seems not so difficult to obtain from the convex case though, given the monotonicity of the distortion on the hyperplane as a function of distortion on the sphere. Here are my other comments:
1. The paper should also cite earlier work on approximate solutions for the conic hull problem where the quality is measured using some other metrics (e.g., "Robust Near-Separable Nonnegative Matrix Factorization Using Linear Optimization", Gillis and Luce, 2015).
2. The paper keeps switching between the case when X is a nonnegative matrix and the case when X is allowed negative entries. For example, lines 41-43 talk about the nonnegative X case, whereas Algorithm 1 seems to be talking about general X (P can have points anywhere in R^m).
3. Algorithm 1: for P anywhere in R^m, the algorithm takes q as any vector in the space. However, I think q should be in the dual cone of Conic(P) to make the algorithm work. For nonnegative data, the dual cone is the positive orthant itself.
4. Lines 148-150: the statement about earlier work [11] is not right -- they also allow any vector q in the positive orthant. See Algorithm 1 in [11], "detection step", and "Remarks (3)" in the same paper.
5. What optimization problem is solved to project points onto the current convex hull at step k? I didn't see the paper talking about this.
6. How is mutant X-RAY (line 316) related to Algorithm 1 in the paper? Essentially, if the denominator (p^T X_j) is removed in the "detection step" in [11] and the "max" variant is used, this is "mutant X-RAY" as called in the paper.
7. Will Algorithm 1 correctly recover the set of columns S if the data is generated using conic combinations of X, i.e., for the case when X = conic(S)? It doesn't look like it. A formal proof or comment would be good.
NIPS
Title An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders

Abstract When applying a stochastic algorithm, one must choose an order to draw samples. The practical choices are without-replacement sampling orders, which are empirically faster and more cache-friendly than uniform-iid-sampling but often have inferior theoretical guarantees. Without-replacement sampling is well understood only for SGD without variance reduction. In this paper, we will improve the convergence analysis and rates of variance reduction under without-replacement sampling orders for composite finite-sum minimization. Our results are twofold. First, we develop a damped variant of Finito called Prox-DFinito and establish its convergence rates with random reshuffling, cyclic sampling, and shuffling-once, under both convex and strongly convex scenarios. These rates match full-batch gradient descent and are state-of-the-art compared to the existing results for without-replacement sampling with variance-reduction. Second, our analysis can gauge how the cyclic order will influence the rate of cyclic sampling and, thus, allows us to derive the optimal fixed ordering. In the highly data-heterogeneous scenario, Prox-DFinito with optimal cyclic sampling can attain a sample-size-independent convergence rate, which, to our knowledge, is the first result that can match with uniform-iid-sampling with variance reduction. We also propose a practical method to discover the optimal cyclic ordering numerically.

∗Equal Contribution. Correspondence to: Kun Yuan. 35th Conference on Neural Information Processing Systems (NeurIPS 2021).

1 Introduction

We study the finite-sum composite optimization problem

min_{x∈R^d} F(x) + r(x), where F(x) = (1/n) Σ_{i=1}^n f_i(x), (1)

where each f_i(x) is differentiable and convex, and the regularization function r(x) is convex but not necessarily differentiable. This formulation arises in many problems in machine learning [34, 39, 14], distributed optimization [20, 3, 19], and signal processing [4, 9]. The leading methods to solve (1) are first-order algorithms such as stochastic gradient descent (SGD) [28, 2] and stochastic variance-reduced methods [14, 6, 7, 17, 10, 32]. In the implementation of these methods, each f_i(x) can be sampled either with or without replacement. Without-replacement sampling draws each f_i(x) exactly once during an epoch, which is numerically faster than with-replacement sampling and more cache-friendly; see the experiments in [1, 38, 11, 7, 37, 5]. This has triggered significant interest in understanding the theory behind without-replacement sampling. Among the most popular without-replacement approaches are cyclic sampling, random reshuffling, and shuffling-once. Cyclic sampling draws the samples in a cyclic order. Random reshuffling reorders the samples at the beginning of each sample epoch. The third approach, however, shuffles data only once before the training begins. Without-replacement sampling has been extensively studied for SGD. It was established in [1, 38, 11, 22, 24] that without-replacement sampling enables SGD with faster convergence. For example, it was proved that without-replacement sampling can speed up uniform-iid-sampling SGD from Õ(1/k) to Õ(1/k²) (where k is the iteration) for strongly-convex costs in [11, 12], and from O(1/k^{1/2}) to O(1/k) for convex costs in [24, 22]. [31] establishes a tight lower bound for random reshuffling SGD. Recent works [27, 22] close the gap between upper and lower bounds.
Authors of [22] also analyze without-replacement SGD with non-convex costs. In contrast to the mature results for SGD, variance reduction under without-replacement sampling is less understood. Variance reduction strategies construct stochastic gradient estimators with vanishing gradient variance, which allows for much larger learning rates and hence speeds up the training process. Variance reduction under without-replacement sampling is difficult to analyze. In the strongly convex scenario, [37, 33] provide linear convergence guarantees for SVRG/SAGA with random reshuffling, but the rates are worse than full-batch gradient descent (GD). Authors of [35, 23] improved the rate so that it can match with GD. In the convex scenario, existing rates for without-replacement sampling with variance reduction, except for the rate established in an independent and concurrent work [18], are still far worse than GD [33, 5]; see Table 1. Furthermore, no existing rates for variance reduction under without-replacement sampling orders, in either convex or strongly convex scenarios, can match those under uniform-iid-sampling, which are essentially sample-size independent. There is a clear gap between the known convergence rates and the superior practical performance of without-replacement sampling with variance reduction.

1.1 Main results

This paper narrows such a gap by providing convergence analysis and rates for proximal DFinito, a proximal damped variant of Finito/MISO [7, 17, 26], which is a well-known variance reduction algorithm, under without-replacement sampling orders. Our main achieved results are:
• We develop a proximal damped variant of Finito/MISO called Prox-DFinito and establish its gradient complexities with random reshuffling, cyclic sampling, and shuffling-once, under both convex and strongly convex scenarios. All these rates match with gradient descent, and are state-of-the-art (up to logarithm factors) compared to existing results for without-replacement sampling with variance-reduction; see Table 1.
• Our novel analysis can gauge how a cyclic order will influence the rate of Prox-DFinito with cyclic sampling. This allows us to identify the optimal cyclic sampling ordering. Prox-DFinito with optimal cyclic sampling, in the highly data-heterogeneous scenario, can attain a sample-size-independent convergence rate, which is the first result, to our knowledge, that can match with uniform-iid-sampling with variance reduction in certain scenarios. We also propose a numerical method to discover the optimal cyclic ordering cheaply.

1.2 Other related works

Our analysis on cyclic sampling is novel. Most existing analyses unify random reshuffling and cyclic sampling into the same framework; see the SGD analysis in [11], the variance-reduction analysis in [10, 36, 23, 37], and the coordinate-update analysis in [5]. These analyses are primarily based on the "sampled-once-per-epoch" property and do not analyze the orders within each epoch, so they do not distinguish cyclic sampling from random reshuffling in analysis. [16] finds that random reshuffling SGD is basically the average over all cyclic sampling trials. This implies cyclic sampling can outperform random reshuffling with a well-designed sampling order. However, [16] does not discuss how much better cyclic sampling can outperform random reshuffling, nor how to achieve such a cyclic order. Different from the existing literature, our analysis introduces an order-specific norm to gauge how cyclic sampling performs with different fixed orders.
With such a norm, we are able to clarify the worst-case and best-case performance of variance reduction with cyclic sampling. Simultaneously and independently, a recent work [18] also provided improved rates for variance reduction under without-replacement sampling orders that can match with gradient descent. However, [18] does not discuss whether and when variance reduction with without-replacement sampling can match with uniform sampling. In addition, [18] studies SVRG while this paper studies Finito/MISO. The convergence analyses in these two works are very different. A detailed comparison between this work and [18] can be found in Sec. 3.3.

1.3 Notations

Throughout the paper we let col{x_1, ..., x_n} denote a column vector formed by stacking x_1, ..., x_n. We let [n] := {1, ..., n} and define the proximal operator as

prox_{αr}(x) := arg min_{y∈R^d} {α r(y) + (1/2)‖y − x‖²}, (2)

which is single-valued when r is convex, closed and proper. In general, we say A is an operator and write A : X → Y if A maps each point in space X to another space Y. So A(x) ∈ Y for all x ∈ X. For simplicity, we write Ax = A(x) and A ◦ Bx = A(B(x)) for operator composition.

Cyclic sampling. We define π := (π(1), π(2), ..., π(n)) as an arbitrary determined permutation of sample indexes. The order π is fixed throughout the entire learning process under cyclic sampling.

Random reshuffling. When starting each epoch, a random permutation τ := (τ(1), τ(2), ..., τ(n)) is generated to specify the order to take samples. Let τ_k denote the permutation of the k-th epoch.

Algorithm 1 Prox-DFinito
Input: z̄⁰ = (1/n) Σ_{i=1}^n z⁰_i, step-size α, and θ ∈ (0, 1);
for epoch k = 0, 1, 2, ... do
    for iteration t = kn + 1, kn + 2, ..., (k + 1)n do
        x^{t−1} = prox_{αr}(z̄^{t−1});
        Pick i_t with some rule;
        Update z^t_{i_t} and z̄^t according to (4a) and (5);
    end for
    z^{(k+1)n}_i ← (1 − θ)z^{kn}_i + θz^{(k+1)n}_i for any i ∈ [n];   (a damping step)
    z̄^{(k+1)n} ← (1 − θ)z̄^{kn} + θz̄^{(k+1)n};   (a damping step)
end for

2 Proximal Finito/MISO with Damping

The proximal gradient method to solve problem (1) is

z^t_i = x^{t−1} − α∇f_i(x^{t−1}), ∀ i ∈ [n], (3a)
x^t = prox_{αr}((1/n) Σ_{i=1}^n z^t_i). (3b)

To avoid the global average that passes over all samples, we propose to update one z_i per iteration:

z^t_i = { x^{t−1} − α∇f_i(x^{t−1}) if i = i_t; z^{t−1}_i if i ≠ i_t }, (4a)
x^t = prox_{αr}((1/n) Σ_{i=1}^n z^t_i). (4b)

When it is invoked with uniform-iid-sampling and r(x) = 0, algorithm (4a)–(4b) reduces to Finito/MISO [7, 17]. When it is invoked with cyclic sampling and r(x) = 0, algorithm (4a)–(4b) reduces to DIAG [23] and WPG [19]. We let z̄^t := (1/n) Σ_{i=1}^n z^t_i. The update (4a) yields

z̄^t = z̄^{t−1} + (z^t_{i_t} − z^{t−1}_{i_t})/n. (5)

This update can be finished with O(d) operations if {z^t_i}_{i=1}^n are stored with O(nd) memory. Furthermore, to increase robustness and simplify the convergence analysis, we impose a damping step on z_i and z̄ when each epoch finishes. The proximal damped Finito/MISO method is listed in Algorithm 1. Note that the damping step does not incur additional memory requirements. A more practical implementation of Algorithm 1 is given as Algorithm 3 in Appendix A.

2.1 Fixed-point recursion reformulation

Algorithm (4a)–(4b) can be reformulated into a fixed-point recursion in {z_i}_{i=1}^n. Such a fixed-point recursion will be utilized throughout the paper. To proceed, we define z = col{z_1, ..., z_n} ∈ R^{nd} and introduce the average operator A : R^{nd} → R^d as Az = (1/n) Σ_{i=1}^n z_i.
We further define the i-th block coordinate operator T_i : R^{nd} → R^{nd} as

T_i z = col{z_1, ..., (I − α∇f_i) ◦ prox_{αr}(Az), ..., z_n},

where I denotes the identity mapping. When applying T_i, it is noted that the i-th block coordinate in z is updated while the others remain unchanged.

Proposition 1. Prox-DFinito with fixed cyclic sampling order π is equivalent to the following fixed-point recursion (see proof in Appendix B.1):

z^{(k+1)n} = (1 − θ)z^{kn} + θT_π z^{kn}, (6)

where T_π = T_{π(n)} ◦ ... ◦ T_{π(1)}. Furthermore, variable x^t can be recovered by

x^t = prox_{αr} ◦ Az^t, t = 0, 1, 2, ... (7)

A similar result also holds for the random reshuffling scenario.

Proposition 2. Prox-DFinito with random reshuffling is equivalent to

z^{(k+1)n} = (1 − θ)z^{kn} + θT_{τ_k} z^{kn}, (8)

where T_{τ_k} = T_{τ_k(n)} ◦ ... ◦ T_{τ_k(1)}. Furthermore, variable x^t can be recovered by following (7).

2.2 Optimality condition

Assume there exists x* that minimizes F(x) + r(x), i.e., 0 ∈ ∇F(x*) + ∂r(x*). Then the relation between the minimizer x* and the fixed point z* of recursions (6) and (8) can be characterized as:

Proposition 3. x* minimizes F(x) + r(x) if and only if there is z* so that (proof in Appendix B.2)

z* = T_i z*, ∀ i ∈ [n], (9)
x* = prox_{αr} ◦ Az*. (10)

Remark 1. If x* minimizes F(x) + r(x), it holds from (9) and (10) that z*_i = (I − α∇f_i) ◦ prox_{αr}(Az*) = x* − α∇f_i(x*) for any i ∈ [n].

2.3 An order-specific norm

To gauge the influence of different sampling orders, we now introduce an order-specific norm.

Definition 1. Given z = col{z_1, ..., z_n} ∈ R^{nd} and a fixed cyclic order π, we define

‖z‖²_π = Σ_{i=1}^n (i/n)‖z_{π(i)}‖² = (1/n)‖z_{π(1)}‖² + (2/n)‖z_{π(2)}‖² + ... + ‖z_{π(n)}‖²

as the π-specific norm. For two different cyclic orders π and π′, it generally holds that ‖z‖²_π ≠ ‖z‖²_{π′}. Note that the coefficients in ‖z‖²_π are delicately designed for technical reasons (see Lemma 1 and its proof in the appendix). The order-specific norm facilitates the performance comparison between two orderings.

3 Convergence Analysis

In this section we establish the convergence rate of Prox-DFinito with cyclic sampling and random reshuffling in the convex and strongly convex scenarios, respectively.

3.1 The convex scenario

We first study the convex scenario under the following assumption:

Assumption 1 (Convex). Each function f_i(x) is convex and L-smooth.

It is worth noting that the convergence results on cyclic sampling and random reshuffling for the convex scenario are quite limited, except for [22, 33, 5, 18].

Cyclic sampling and shuffling-once. We first introduce the following lemma showing that T_π is non-expansive with respect to ‖·‖_π, which is fundamental to the convergence analysis.

Lemma 1. Under Assumption 1, if step-size 0 < α ≤ 2/L and the data is sampled with a fixed cyclic order π, it holds that (see proof in Appendix C.1)

‖T_π u − T_π v‖²_π ≤ ‖u − v‖²_π, ∀u, v ∈ R^{nd}. (11)

Recall from (6) that the sequence z^{kn} is generated through z^{(k+1)n} = S_π z^{kn}. Since S_π = (1 − θ)I + θT_π and T_π is non-expansive, we can prove that the distance ‖z^{(k+1)n} − z^{kn}‖² converges to 0 sublinearly:

Lemma 2. Under Assumption 1, if step-size 0 < α ≤ 2/L and the data is sampled with a fixed cyclic order π, it holds for any k = 0, 1, ... that (see proof in Appendix C.2)

‖z^{(k+1)n} − z^{kn}‖²_π ≤ (θ/((k + 1)(1 − θ)))‖z⁰ − z*‖²_π, (12)

where θ ∈ (0, 1) is the damping parameter.

With Lemma 2 and the relation between x^t and z^t in (7), we can establish the convergence rate:

Theorem 1.
Under Assumption 1, if step-size 0 < α ≤ 2/L and the data is sampled with a fixed cyclic order π, it holds that (see proof in Appendix C.3)

min_{g∈∂r(x^{kn})} ‖∇F(x^{kn}) + g‖² ≤ CL²/((k + 1)θ(1 − θ)), (13)

where θ ∈ (0, 1) and C = (2/(αL))² · ((log(n) + 1)/n) · ‖z⁰ − z*‖²_π.

Remark 2. Inspired by reference [16], one can take the expectation over the cyclic order π in (13) to obtain the convergence rate of Prox-DFinito shuffled once before training begins (with C = (2/(αL))² · ((n + 1)(log(n) + 1)/(2n²)) · ‖z⁰ − z*‖²):

E min_{g∈∂r(x^{kn})} ‖∇F(x^{kn}) + g‖² ≤ CL²/((k + 1)θ(1 − θ)). (14)

Random reshuffling. We let τ_k denote the sampling order used in the k-th epoch. Apparently, τ_k is a uniformly distributed random variable with n! realizations. With a similar analysis technique, we can also establish the convergence rate under random reshuffling in the expectation sense.

Theorem 2. Under Assumption 1, if step-size 0 < α ≤ 2/L and data is sampled with random reshuffling, it holds that (see proof in Appendix D.2)

E min_{g∈∂r(x^{kn})} ‖∇F(x^{kn}) + g‖² ≤ CL²/((k + 1)θ(1 − θ)), (15)

where θ ∈ (0, 1) and C = (5/(3αL))² · (1/n) · ‖z⁰ − z*‖².

Comparing (15) with (13), it is observed that random reshuffling replaces the constant ‖z⁰ − z*‖²_π by ‖z⁰ − z*‖² and removes the log(n) term in the upper bound.

3.2 The strongly convex scenario

In this subsection, we study the convergence rate of Prox-DFinito under the following assumption:

Assumption 2 (Strongly Convex). Each function f_i(x) is µ-strongly convex and L-smooth.

Theorem 3. Under Assumption 2, if step-size 0 < α ≤ 2/(µ + L), it holds that (see proof in Appendix E)

(E) ‖x^{kn} − x*‖² ≤ (1 − 2θαµL/(µ + L))^k C, (16)

where the expectation applies in the random reshuffling case, θ ∈ (0, 1), and C = ((log(n) + 1)/n)‖z⁰ − z*‖²_π with π-order cyclic sampling, or C = (1/n)‖z⁰ − z*‖² with random reshuffling.

Remark 3. Note that when θ → 1, Prox-DFinito actually reaches its best performance, so damping is essentially not necessary in the strongly convex scenario.

3.3 Comparison with the existing results

Recalling ‖z‖²_π = Σ_{i=1}^n (i/n)‖z_{π(i)}‖², it holds that

(1/n)‖z‖² ≤ ‖z‖²_π ≤ ‖z‖², ∀z, π. (17)

For a fair comparison with existing works, we consider the worst-case performance of cyclic sampling by relaxing ‖z⁰ − z*‖²_π to its upper bound ‖z⁰ − z*‖². Letting α = O(1/L), θ = 1/2, and assuming (1/n)‖z⁰ − z*‖² = O(1), the convergence rates derived in Theorems 1–3 reduce to

C-Cyclic = Õ(L²/k), C-RR = O(L²/k),
SC-Cyclic = Õ((1 − 1/κ)^k), SC-RR = O((1 − 1/κ)^k),

where "C" denotes "convex" and "SC" denotes "strongly convex", κ = L/µ, and Õ(·) hides the log(n) factor. Note that all rates are in the epoch-wise sense. These rates can be translated into the gradient complexity (equivalent to sample complexity) of Prox-DFinito to reach an ε-accurate solution. The comparison with existing works is listed in Table 1.

Different metrics. Except for [5] and our Prox-DFinito algorithm, whose convergence analyses are based on the gradient norm in the convex and smooth scenario, results in other references are based on the function value metric (i.e., the objective error F(x^{kn}) − F(x*)). The function value metric can imply the gradient norm metric, but not always vice versa. To compare Prox-DFinito with other established results in the same metric, we have to transform the rates in other references into the gradient norm metric. The comparison is listed in Table 1. When the gradient norm metric is used, we observe that the rates of Prox-DFinito match those of gradient descent, and are state-of-the-art compared to the existing results.
However, the rate of Prox-DFinito in terms of the function value is not known yet (this unknown rate may end up being worse than those of the other methods). For the non-smooth scenario, our metric $\min_{g\in\partial r(x)}\|\nabla F(x) + g\|^2$ may not be bounded by the functional suboptimality $F(x) + r(x) - F(x^\star) - r(x^\star)$, and hence the Prox-DFinito results are not comparable with those in [21, 35, 37, 33, 18]. The results listed in Table 1 are all for the smooth scenario of [21, 35, 37, 33, 18], and we use "Support Prox" to indicate whether the results cover the non-smooth scenario or not.

Assumption scope. Except for references [18, 35] and the Proximal GD algorithm, whose convergence analyses assume the average of the summand functions to be $\bar L$-smooth (and perhaps $\bar\mu$-strongly convex), the results in other references are based on the stronger assumption that each summand function is $L$-smooth (and perhaps $\mu$-strongly convex). Note that $\bar L$ can sometimes be much smaller than $L$. To compare [18, 35] and Proximal GD with other references under the same assumption, we set $L = \bar L$ in Table 1. However, it is worth noting that when the individual constants $L_i$ differ drastically from each other and can be evaluated precisely, results relying on $\bar L$ (e.g., [35] and [18]) can be much better than the results established in this work.

Comparison with GD. It is observed from Table 1 that Prox-DFinito with cyclic sampling or random reshuffling is no worse than Proximal GD. It is the first no-worse-than-GD result, besides the independent and concurrent work [18], that covers both the non-smooth and the convex scenarios for variance-reduction methods under without-replacement sampling orders. The pioneering work DIAG [23] established a similar result only for smooth and strongly convex problems.²

Comparison with RR/CS methods. Prox-DFinito achieves nearly state-of-the-art gradient complexity in both the convex and strongly convex scenarios (except for the convex and smooth case, due to the weaker metric adopted) among known without-replacement stochastic approaches to solving the finite-sum optimization problem (1); see Table 1. In addition, it is worth noting that in Table 1, the algorithms of [33, 35, 23] and our Prox-DFinito have an $O(nd)$ memory requirement while the others only need $O(d)$ memory. In other words, Prox-DFinito is memory-costly in spite of its superior theoretical convergence rate and sample complexity.

Comparison with uniform-iid-sampling methods. It is known that uniform-sampling variance reduction can achieve an $O(\max\{n, L/\mu\}\log(1/\epsilon))$ sample complexity for strongly convex problems [14, 26, 6] and $O(L^2/\epsilon)$ (when using the metric $\mathbb{E}\|\nabla F(x)\|^2$) for convex problems [26]. In other words, these uniform-sampling methods have sample complexities that are independent of the sample size $n$. Our achieved results (and the other existing results listed in Table 1 and [18]) for random reshuffling or worst-case cyclic sampling cannot match uniform sampling yet. However, this paper establishes that Prox-DFinito with the optimal cyclic order, in the highly data-heterogeneous scenario, can achieve an $\tilde O(L^2/\epsilon)$ sample complexity in the convex scenario, which matches uniform sampling up to a $\log(n)$ factor; see the detailed discussion in Sec. 4. To the best of our knowledge, it is the first result, at least in certain scenarios, in which variance reduction under without-replacement sampling orders matches its uniform-sampling counterpart in terms of the sample complexity upper bound.
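For completeness, the standard inequality behind translating function-value rates into gradient-norm rates in the smooth, non-regularized case ($r \equiv 0$) can be derived as follows; this is a textbook fact about $L$-smooth convex functions, not a result specific to the paper.

```latex
% For L-smooth F, apply the descent lemma at y = x - (1/L) * grad F(x):
\begin{align*}
F(x^\star) \;\le\; F(y)
  &\;\le\; F(x) + \langle \nabla F(x),\, y - x\rangle + \tfrac{L}{2}\,\|y - x\|^2 \\
  &\;=\; F(x) - \tfrac{1}{L}\|\nabla F(x)\|^2 + \tfrac{1}{2L}\|\nabla F(x)\|^2
   \;=\; F(x) - \tfrac{1}{2L}\|\nabla F(x)\|^2,
\end{align*}
% so that
\|\nabla F(x)\|^2 \;\le\; 2L\,\bigl(F(x) - F(x^\star)\bigr).
```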
Nevertheless, it still remains unclear how to close the gap in sample complexity between variance reduction under without-replacement sampling and under uniform sampling in more general settings (i.e., settings other than the highly data-heterogeneous scenario).

² While DIAG is established to outperform gradient descent in [23], we find its convergence rate is still of the same order as GD; its superiority to GD comes from a constant improvement, not an order improvement.

4 Optimal Cyclic Sampling Order

Sec. 3.3 examines the worst-case gradient complexity of Prox-DFinito with cyclic sampling, which is worse than that of random reshuffling by a factor of $\log(n)$ in both the convex and strongly convex scenarios. In this section we examine how Prox-DFinito performs with optimal cyclic sampling.

4.1 Optimal cyclic sampling

Given the sample size $n$, step-size $\alpha$, epoch index $k$, and constants $L$, $\mu$ and $\theta$, it follows from Theorem 1 that the rate of $\pi$-order cyclic sampling is determined by the constant
$$\|z^0 - z^\star\|_\pi^2 = \sum_{i=1}^n \frac{i}{n}\,\|z^0_{\pi(i)} - z^\star_{\pi(i)}\|^2. \qquad (18)$$
We define the corresponding optimal cyclic order as follows.

Definition 2. An optimal cyclic sampling order $\pi^\star$ of Prox-DFinito is defined as
$$\pi^\star := \arg\min_\pi \{\|z^0 - z^\star\|_\pi^2\}. \qquad (19)$$
Such an optimal cyclic order can be identified as follows (see proof in Appendix F).

Proposition 4. The optimal cyclic order for Prox-DFinito is the reverse order of $\{\|z_i^0 - z_i^\star\|^2\}_{i=1}^n$, i.e., samples with larger $\|z_i^0 - z_i^\star\|^2$ are drawn earlier in the epoch.

Remark 4 (IMPORTANCE INDICATOR). Proposition 4 implies that $\|z_i^0 - z_i^\star\|^2$ can be used as an importance indicator of sample $i$. Recall $z_i^\star = x^\star - \alpha\nabla f_i(x^\star)$ from Remark 1. If $z_i^0$ is initialized as $0$, the importance indicator of sample $i$ reduces to $\|x^\star - \alpha\nabla f_i(x^\star)\|^2$, which is determined by both $x^\star$ and $\nabla f_i(x^\star)$. If $z_i^0$ is initialized close to $x^\star$, we then have $\|z_i^0 - z_i^\star\|^2 \approx \alpha^2\|\nabla f_i(x^\star)\|^2$. In other words, the importance of sample $i$ can be measured by $\|\nabla f_i(x^\star)\|$, which is consistent with the importance indicator in uniform-iid-sampling [41, 40].
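Proposition 4 translates directly into an argsort. The following small sketch (the naming is our own) builds $\pi^\star$ by sorting the importance indicators in decreasing order and evaluates the resulting $\|z^0 - z^\star\|_\pi^2$, so one can check numerically that it is no larger than the value under random orders.

```python
import numpy as np

def pi_norm_sq_of_diff(diff, order):
    # ||z^0 - z*||_pi^2 = sum_i (i/n) ||(z^0 - z*)_{pi(i)}||^2, cf. (18)
    n = len(order)
    return sum((i + 1) / n * np.linalg.norm(diff[j]) ** 2
               for i, j in enumerate(order))

def optimal_cyclic_order(diff):
    # Proposition 4: draw samples with larger ||z_i^0 - z_i*||^2 earlier,
    # so the largest indicators receive the smallest weights i/n.
    importance = np.sum(diff ** 2, axis=1)
    return np.argsort(-importance)            # indices in decreasing importance

rng = np.random.default_rng(1)
diff = rng.normal(size=(50, 8))               # stands in for z^0 - z*
pi_star = optimal_cyclic_order(diff)

best = pi_norm_sq_of_diff(diff, pi_star)
for _ in range(5):
    assert best <= pi_norm_sq_of_diff(diff, rng.permutation(50)) + 1e-12
print(f"optimal ||z0 - z*||_pi^2 = {best:.3f}")
```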
4.2 Optimal cyclic sampling can achieve sample-size-independent complexity

Recall from Theorem 1 that the sample complexity of Prox-DFinito with cyclic sampling in the convex scenario is determined by $\frac{\log(n)}{n}\|z^0 - z^\star\|_\pi^2$. From (17) we have
$$\frac{1}{n}\|z^0-z^\star\|^2 \le \|z^0-z^\star\|_\pi^2 \le \|z^0-z^\star\|^2, \quad \forall\, z, \pi. \qquad (20)$$
In Sec. 3.3 we considered the worst-case performance of cyclic sampling, i.e., we bounded $\|z^0-z^\star\|_\pi^2$ by its upper bound $\|z^0-z^\star\|^2$. In this section, we examine the best-case performance using the lower bound $\|z^0-z^\star\|^2/n$, and provide a scenario in which this best-case performance is achievable. We assume $\|z^0-z^\star\|^2/n = O(1)$ as in previous sections.

Proposition 5. Given fixed constants $n$, $\alpha$, $k$, $\theta$, $L$, and the optimal cyclic order $\pi^\star$, if the condition
$$\rho := \frac{\|z^0-z^\star\|_{\pi^\star}^2}{\|z^0-z^\star\|^2} = O\Big(\frac{1}{n}\Big) \qquad (21)$$
holds, then Prox-DFinito with optimal cyclic sampling achieves sample complexity $\tilde O(L^2/\epsilon)$.

The above proposition can be proved by directly substituting (21) into Theorem 1. In the following, we discuss a data-heterogeneous scenario in which relation (21) holds.

A data-heterogeneous scenario. To this end, we let $x^\star = \mathrm{col}\{x^\star,\cdots,x^\star\}$ and $\nabla f(x^\star) = \mathrm{col}\{\nabla f_1(x^\star),\cdots,\nabla f_n(x^\star)\}$; it follows from Remark 1 that $z^\star = x^\star - \alpha\nabla f(x^\star)$. If we set $z^0 = 0$ (which is common in implementations) and $\alpha = 1/L$ (the theoretically suggested step-size), it then holds that $\|z_i^0 - z_i^\star\|^2 = \|x^\star - \nabla f_i(x^\star)/L\|^2$. Next, we assume $\|z_i^0 - z_i^\star\|^2 = \|x^\star - \nabla f_i(x^\star)/L\|^2 = n\beta^{i-1}$ (with $0 < \beta < 1$) holds. Under this assumption, the optimal cyclic order is $\pi^\star = (1, 2, \cdots, n)$. Now we examine $\|z^0 - z^\star\|_{\pi^\star}^2$ and $\|z^0 - z^\star\|^2$:
$$\sum_{i=1}^n \|z_i^0-z_i^\star\|^2 = n\sum_{i=1}^n \beta^{i-1} \approx \frac{n}{1-\beta}, \qquad \sum_{i=1}^n \frac{i}{n}\|z_i^0-z_i^\star\|^2 = \sum_{i=1}^n i\,\beta^{i-1} \approx \frac{1}{(1-\beta)^2}$$
when $n$ is large, which implies that $\rho = \|z^0-z^\star\|_{\pi^\star}^2/\|z^0-z^\star\|^2 = O(1/n)$ since $\beta$ is a constant independent of $n$. With Proposition 5, we know Prox-DFinito with optimal cyclic sampling can achieve $\tilde O(L^2/\epsilon)$, which is independent of the sample size $n$. Note that $\|\nabla f_i(x^\star)\|^2 = n\beta^{i-1}$ implies a data-heterogeneous scenario in which $\beta$ can roughly gauge the variety of the data samples.

4.3 Adaptive importance reshuffling

Algorithm 2 Adaptive Importance Reshuffling
Initialize: $w^0(i) = \|z_i^0 - \bar z^0\|^2$ for $i \in [n]$;
for epoch $k = 0, 1, 2, \cdots$ do
  Reshuffle $[n]$ based on the vector $w^k$;
  Run one Prox-DFinito epoch;
  Update $w^{k+1}$ according to (22);
end for

The optimal cyclic order given by Proposition 4 is not practical since the importance indicator of each sample depends on the unknown $z_i^\star = x^\star - \alpha\nabla f_i(x^\star)$. This problem can be overcome by replacing $z_i^\star$ with its estimate $z_i^{kn}$, which leads to an adaptive importance reshuffling strategy. We introduce $w \in \mathbb{R}^n$ as an importance-indicating vector, with each element $w_i$ indicating the importance of sample $i$ and initialized as $w^0(i) = \|z_i^0 - \bar z^0\|^2$, $\forall\, i \in [n]$. In the $k$-th epoch, we draw sample $i$ earlier if $w^k(i)$ is larger. After the $k$-th epoch, $w$ is updated as
$$w^{k+1}(i) = (1-\gamma)\,w^k(i) + \gamma\,\|z_i^0 - z_i^{(k+1)n}\|^2, \qquad (22)$$
where $i \in [n]$ and $\gamma \in (0,1)$ is a fixed damping parameter. Provided that $z_i^{kn} \to z_i^\star$, the above recursion guarantees $w^k(i) \to \|z_i^0 - z_i^\star\|^2$. In other words, the order decided by $w^k$ gradually adapts to the optimal cyclic order as $k$ increases. Since the order decided by the importance changes from epoch to epoch, we call this approach adaptive importance reshuffling; it is listed in Algorithm 2, and a sketch of its reshuffling logic follows below. We provide the convergence guarantees of the adaptive importance reshuffling method in Appendix G.
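The following is a minimal sketch of Algorithm 2, assuming an epoch runner like the `damped_epoch` function sketched in Sec. 2; the function signature and initialization details are our own illustrative choices.

```python
import numpy as np

def adaptive_importance_reshuffling(z0, run_epoch, gamma, num_epochs):
    # Algorithm 2: reshuffle each epoch by the importance weights w, then
    # move w toward ||z_i^0 - z_i^{(k+1)n}||^2 as in (22).
    w = np.sum((z0 - z0.mean(axis=0)) ** 2, axis=1)   # w^0(i) = ||z_i^0 - zbar^0||^2
    z = z0.copy()
    for _ in range(num_epochs):
        order = np.argsort(-w)                        # larger w(i) is drawn earlier
        z = run_epoch(z, order)                       # one Prox-DFinito epoch
        w = (1 - gamma) * w + gamma * np.sum((z0 - z) ** 2, axis=1)
    return z
```

Here `run_epoch(z, order)` stands for one pass of Prox-DFinito under the given sampling order; as $z_i^{kn} \to z_i^\star$, the weights converge to the true importance indicators and the order adapts to $\pi^\star$.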
5 Numerical Experiments

5.1 Comparison with SVRG and SAGA under without-replacement sampling orders

In this experiment, we compare DFinito with SVRG [14] and SAGA [7] under without-replacement sampling (RR and cyclic sampling). We consider a setting similar to [18, Figure 2], where all step-sizes are chosen as the theoretically optimal ones; see Table 2 in Appendix H. We run experiments for the regularized logistic regression problem, i.e., problem (1) with
$$f_i(x) = \log\big(1 + \exp(-y_i\langle w_i, x\rangle)\big) + \frac{\lambda}{2}\|x\|^2$$
on three widely used datasets: CIFAR-10 [15], MNIST [8], and COVTYPE [29]. This problem is $L$-smooth and $\mu$-strongly convex with $L = \frac{1}{4n}\lambda_{\max}(W^\top W) + \lambda$ and $\mu = \lambda$. From Figure 1, it is observed that DFinito outperforms SVRG and SAGA in terms of gradient complexity under without-replacement sampling orders with their best-known theoretical rates. The comparison with SVRG and SAGA under practically optimal step-sizes is in Appendix J.

5.2 DFinito with cyclic sampling

Justification of the optimal cyclic sampling order. To justify the optimal cyclic sampling order $\pi^\star$ suggested in Proposition 4, we test DFinito with eight arbitrarily selected cyclic orders, and compare them with the optimal cyclic ordering $\pi^\star$ as well as with the adaptive importance reshuffling method (Algorithm 2). To make the comparison distinguishable, we construct a least-squares problem with heterogeneous data samples with $n = 200$, $d = 50$, $L = 100$, $\mu = 10^{-2}$ (see Appendix I for the constructed problem). The constructed problem has $\rho = \|z^0-z^\star\|_{\pi^\star}^2/\|z^0-z^\star\|^2 = 0.006$ when $z_i^0 = 0$, $x^0 = 0$, and $\alpha = \frac{1}{3L}$, which is close to $1/n = 0.005$. In the left plot of Fig. 2, it is observed that optimal cyclic sampling achieves the fastest convergence rate. Furthermore, the adaptive shuffling method matches the optimal cyclic ordering. These observations are consistent with the theoretical results derived in Secs. 4.2 and 4.3.

Optimal cyclic sampling can achieve sample-size-independent complexity. It is established in [26] that Finito with uniform-iid-sampling can achieve $n$-independent gradient complexity with $\alpha = \frac{n}{8L}$. In this experiment, we compare DFinito ($\alpha = \frac{2}{L}$) with Finito under uniform sampling (8 runs, $\alpha = \frac{n}{8L}$) in a convex and highly heterogeneous scenario ($\rho = O(\frac{1}{n})$). The constructed problem has $n = 500$, $d = 20$, $L = 0.3$, $\theta = 0.5$ and $\|z_i^0 - z_i^\star\| = 10000 \times 0.1^{i-1}$, $1 \le i \le n$ (see the detailed initialization in Appendix J). We also depict DFinito with random reshuffling (8 runs) as another baseline. In the right plot of Figure 2, it is observed that the convergence curve of DFinito with $\pi^\star$-cyclic sampling matches that of Finito with uniform sampling. This implies DFinito can achieve the same $n$-independent gradient complexity as Finito with uniform sampling.

5.3 More experiments

We conduct more experiments in Appendix J. First, we compare DFinito with GD/SGD to justify its empirical superiority to these methods. Second, we validate how different levels of data heterogeneity influence optimal cyclic sampling. Third, we examine the performance of SVRG, SAGA, and DFinito under without/with-replacement sampling using grid-search (not theoretical) step-sizes.

6 Conclusion and Discussion

This paper develops Prox-DFinito and analyzes its convergence rate under without-replacement sampling in both the convex and strongly convex scenarios. Our derived rates are state-of-the-art compared to existing results. In particular, this paper derives the best-case convergence rate for Prox-DFinito with cyclic sampling, which can be sample-size-independent in the highly data-heterogeneous scenario. A future direction is to close the gap in gradient complexity between variance reduction under without-replacement and uniform-iid-sampling in more general settings.
1. What is the focus of the paper, and what are the key contributions of the proposed algorithm?
2. What are the strengths of the paper, particularly in terms of its theoretical analysis?
3. Are there any concerns or limitations regarding the proposed approach? If so, what are they?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper

This paper develops a proximal damped version of the Finito algorithm. The algorithm is proved to achieve the same convergence rate as proximal GD for the cyclic sampling, random reshuffling, and shuffling-once versions of the algorithm. Further, the authors claim that this is the first* shuffling-based variance reduction algorithm to achieve this convergence rate. The paper also gives a new norm that captures the optimality of sampling orders and provides a heuristic based on it for importance-based reshuffling. Besides the theoretical results, the empirical results seem to suggest that the proposed algorithm is indeed faster than other variance reduction algorithms.

*: The authors cite a concurrent work (Malinovsky et al.) that also achieves the same convergence rates for general convex functions, but the algorithms in the two papers are different.

Review

This paper proposes Prox-DFinito, which is a shuffling-based variance reduction algorithm. The theoretical results show that the cyclic sampling, random reshuffling, and shuffling-once versions of the algorithm achieve the same convergence rate as GD (up to logarithmic factors) on general convex and strongly convex functions. The paper also proposes a new heuristic to obtain good sampling orders based on a new norm. Overall, the theoretical results look good and the empirical evaluation supports the theory.

A concern that I have is regarding the claim that optimal cyclic sampling can achieve a sample complexity of $\tilde O(L^2/\epsilon)$, which is independent of $n$. While this might indeed be correct, it might be slightly misleading. The fact is that without looking at all of the $n$ functions at least once, good convergence cannot be achieved: consider the case where a fraction of the functions have minima very far away from the others. Hence, even to determine the optimal cyclic order, at least $\Omega(n)$ computation must be done. The authors should clarify this.

The empirical evaluation of the optimal cyclic sampling order seems to be done on an artificial dataset of quadratics that share the same minima. This does not give sufficient indication as to whether the proposed optimal cyclic sampling order would work in practice. Can the authors provide an evaluation on real datasets?
NIPS
1. What are the strengths and weaknesses of the proposed method Prox-DFinito?
2. How does the reviewer assess the guarantees provided by the method in the convex case?
3. How do the results of the paper compare to state-of-the-art ones, particularly in the strongly convex case?
4. What are the issues with the complexity bounds in the non-proximal scenario?
5. How does the reviewer evaluate the significance of the theoretical results in the paper?
6. Are there any concerns regarding the comparison with related work in the paper?
Summary Of The Paper Review
Summary Of The Paper

The paper proposes a new method called Prox-DFinito, based on proximal Finito with without-replacement sampling. The authors derive complexity bounds for the proposed method in the convex case (for making the squared norm of the gradient small) and the strongly convex case (for making the squared distance to the solution small) that match, under some additional assumptions, the rate of Gradient Descent. Moreover, under additional assumptions on the objective function, the authors show a sample-size-independent bound in the convex case. The proofs are non-standard but clean and easy to follow. However, the paper has several strong weaknesses.

Review

Strengths

Clarity. The paper is clearly written and well-organized.

Interesting proofs. The proofs of the theoretical results of the paper are non-standard and, therefore, valuable. In addition, the proofs are easy to follow and do not contain inaccuracies.

Weaknesses

Weak guarantees in the convex case. In the convex case (i.e., all $f_i$ are convex), the authors establish complexity bounds ensuring that $\mathrm{dist}^2\big(\nabla F(x^{kn}), -\partial r(x^{kn})\big) = \min_{g\in\partial r(x^{kn})}\|\nabla F(x^{kn}) + g\|^2 \le \varepsilon$. In other papers and, in particular, in [18], bounds are obtained to ensure $F(x^{kn}) - F(x^\star) \le \varepsilon$ (no regularization). When the objective is convex and smooth, it is possible that the gradient is small but the functional suboptimality is huge. Moreover, it is more important to achieve small functional suboptimality than a small norm of the gradient. Although the proofs are interesting, the result in the convex case is not strong enough and cannot be directly compared with known results. The authors should clarify this in Table 1, since in the cited papers the bounds in the generally convex case are established for functional suboptimality, not for the squared gradient norm. Moreover, the authors should clarify what they mean by such a comparison. In the non-proximal scenario, one can use the classical inequality $\|\nabla F(x)\|^2 \le 2L(F(x) - F(x^\star))$, but there are no analogs of this inequality for composite problems. This place should be carefully clarified.

Complexity bounds depend on $\frac{1}{n}\|z^0 - z^\star\|_\pi^2$ or $\frac{1}{n}\|z^0 - z^\star\|_2^2$. This norm has an implicit dependence on the heterogeneity of the local loss functions $f_i$. Indeed, if $z^0$ is close to $x^\star$, then for RR, $C \sim \frac{\alpha^2}{n}\sum_{i=1}^n\|\nabla f_i(x^\star)\|^2$, where $C$ is the factor appearing in all upper bounds. In other words, even if the starting point is extremely close to the solution, the constant $C$ can be large. This observation immediately implies that all derived rates in the paper can be arbitrarily worse than known results. In contrast, the concurrent work [18] does not have this issue.

Results are weaker than state-of-the-art ones. In view of the first weakness, the complexity bounds in the convex case are weaker by default since they do not provide guarantees for the functional suboptimality. Next, in the strongly convex case, much tighter results are given in [35], where the authors use $L$ being the average of the individual smoothness constants (which can be almost $n$ times smaller than the worst one) and $\mu$ being the strong convexity constant of the average, and do not require the individual loss functions to be strongly convex. In contrast, this paper uses the worst $L$ for all summands and relies on the assumption that all $f_i$ are $\mu$-strongly convex. Moreover, in view of weakness 2, it is even impossible to fairly compare the results without additional assumptions.

Incomplete comparison with the related work. The paper tries to create an impression that the obtained results are the current state-of-the-art. However, it is not true. As was mentioned earlier, the derived results are weaker than the state-of-the-art ones, but the authors do not write about such important details. Moreover, since [18] is recent concurrent work, the authors should add more details in the main part of the paper. First of all, the rates from [18] should be added to Table 1 (otherwise it creates a biased impression). Next, the authors should explicitly write that the results in the convex case from [18] are stronger (see weakness 1). Moreover, [18] contains some results for the "Big Data regime" establishing $O\big(\frac{nL}{\mu}\log\frac{1}{\varepsilon}\big)$ complexity in the strongly convex case, and this is shown without assuming strong convexity of each summand. Finally, the rates from [18] do not have the issue described in weakness 2. Therefore, the authors should write about all of these details in the paper and at least briefly mention them somewhere in the main part.

Questions and comments

Table 1, results from [33] and [5]: The bounds are strange since their "physical dimension" is incorrect. That is, the complexity should be a dimensionless quantity, whereas $\frac{L^2}{\mu}$ and $\frac{L}{\mu^2}$ are not. Please correct these bounds.

line 42, "accelerated": This word has a certain meaning (Nesterov's acceleration) in the optimization literature that differs from what the authors want to say (e.g., see d'Aspremont, Alexandre, Damien Scieur, and Adrien Taylor. "Acceleration methods." arXiv preprint arXiv:2101.09545 (2021)). The authors should rewrite the sentence.

lines 46-47, "In the generally convex scenario, existing rates for without-replacement sampling with variance reduction are still far worse than GD": this is not true because [18] exists.

line 101, "Note that the damping step does not incur additional memory requirements": In theory, it is still $O(nd)$, but in practice, there is a big difference between $nd$ and $2nd$.

Algorithm 3: The algorithm is not equivalent to Algorithm 1.

inequality (54), the last line: the identity is not correct since $\sum_{t=0}^{j-i-1}(1+\frac{1}{n})^t \neq (1+\frac{1}{n})^{j-i-1}$. However, one can stop one line earlier since it is not important for the next arguments in the proof.

Comment after the rebuttal

I thank the authors for their detailed response. I have read it carefully.

Bounds in the convex case. Both the authors and I agree that bounds for the norm of the gradient are weaker than bounds for functional suboptimality, so my concern is still valid. However, I acknowledge that this work and [18], which is written independently and has a different analysis, are the first results for VR methods with without-replacement sampling that match the rate of GD in the convex case under the assumption that all $f_i$ and $F$ have the same smoothness constant. Moreover, in the "highly heterogeneous scenario," the rate derived in this paper is independent of $n$, which I find quite interesting.

On the $\frac{1}{n}\|z^0 - z^\star\|^2$ factor. Indeed, if $z_i^0 = x^0 - \alpha\nabla f_i(x^0)$, then there is no issue. I suggest the authors write about this initialization in the main text. I think it is important for the comparison with other results.

Revised Table 1 and clarifications about the comparison with other works. Now these parts are much more transparent for the readers. I believe these corrections are very important.

On Algorithm 3. Thank you for the clarifications. Now I see the equivalence. Initially, I was confused because of the difference in the formulas for $z_{i_t}^t$ in Algorithms 1 and 3. However, after your clarifications, I realized that the two methods are indeed equivalent. I suggest the authors add these clarifications to the paper.

Overall, although the paper has several limitations, it does contain some valuable contributions. In particular, the idea of using an order-specific norm in the analysis is of separate interest. Taking all these aspects into account, I decided to increase my score from 4 to 6.
NIPS
Title An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders Abstract When applying a stochastic algorithm, one must choose an order to draw samples. The practical choices are without-replacement sampling orders, which are empirically faster and more cache-friendly than uniform-iid-sampling but often have inferior theoretical guarantees. Without-replacement sampling is well understood only for SGD without variance reduction. In this paper, we will improve the convergence analysis and rates of variance reduction under without-replacement sampling orders for composite finite-sum minimization. Our results are in two-folds. First, we develop a damped variant of Finito called Prox-DFinito and establish its convergence rates with random reshuffling, cyclic sampling, and shuffling-once, under both convex and strongly convex scenarios. These rates match full-batch gradient descent and are state-of-the-art compared to the existing results for without-replacement sampling with variance-reduction. Second, our analysis can gauge how the cyclic order will influence the rate of cyclic sampling and, thus, allows us to derive the optimal fixed ordering. In the highly data-heterogeneous scenario, Prox-DFinito with optimal cyclic sampling can attain a sample-size-independent convergence rate, which, to our knowledge, is the first result that can match with uniform-iid-sampling with variance reduction. We also propose a practical method to discover the optimal cyclic ordering numerically. 1 Introduction We study the finite-sum composite optimization problem min x∈Rd F (x) + r(x) and F (x) = 1 n n∑ i=1 fi(x). (1) where each fi(x) is differentiable and convex, and the regularization function r(x) is convex but not necessarily differentiable. This formulation arises in many problems in machine learning [34, 39, 14], distributed optimization [20, 3, 19], and signal processing [4, 9]. The leading methods to solve (1) are first-order algorithms such as stochastic gradient descent (SGD) [28, 2] and stochastic variance-reduced methods [14, 6, 7, 17, 10, 32]. In the implementation of ∗Equal Contribution. Correspondence to: Kun Yuan 35th Conference on Neural Information Processing Systems (NeurIPS 2021). these methods, each fi(x) can be sampled either with or without replacement. Without-replacement sampling draws each fi(x) exactly once during an epoch, which is numerically faster than withreplacement sampling and more cache-friendly; see the experiments in [1, 38, 11, 7, 37, 5]. This has triggered significant interests in understanding the theory behind without-replacement sampling. Among the most popular without-replacement approaches are cyclic sampling, random reshuffling, and shuffling-once. Cyclic sampling draws the samples in a cyclic order. Random reshuffling reorders the samples at the beginning of each sample epoch. The third approach, however, shuffles data only once before the training begins. Without-replacement sampling have been extensively studied for SGD. It was established in [1, 38, 11, 22, 24] that without-replacement sampling enables SGD with faster convergence For example, it was proved that without-replacement sampling can speed up uniform-iid-sampling SGD from Õ(1/k) to Õ(1/k2) (where k is the iteration) for strongly-convex costs in [11, 12], and O(1/k1/2) to O(1/k) for the convex costs in [24, 22]. [31] establishes a tight lower bound for random reshuffling SGD. Recent works [27, 22] close the gap between upper and lower bounds. 
Authors of [22] also analyzes without-replacement SGD with non-convex costs. In contrast to the mature results in SGD, variance-reduction under without-replacement sampling are less understood. Variance reduction strategies construct stochastic gradient estimators with vanishing gradient variance, which allows for much larger learning rate and hence speed up training process. Variance reduction under without-replacement sampling is difficult to analyze. In the strongly convex scenario, [37, 33] provide linear convergence guarantees for SVRG/SAGA with random reshuffling, but the rates are worse than full-batch gradient descent (GD). Authors of [35, 23] improved the rate so that it can match with GD. In convex scenario, existing rates for without-replacement sampling with variance reduction, except for the rate established in an independent and concurrent work [18], are still far worse than GD [33, 5], see Table 1. Furthermore, no existing rates for variance reduction under without-replacement sampling orders, in either convex or strongly convex scenarios, can match those under uniform-iid-sampling which are essentially sample-size independent. There is a clear gap between the known convergence rates and superior practical performance for without-replacement sampling with variance reduction. 1.1 Main results This paper narrows such gap by providing convergence analysis and rates for proximal DFinito, a proximal damped variant of Finito/MISO [7, 17, 26], which is a well-known variance reduction algorithm, under without-replacement sampling orders. Our main achieved results are: • We develop a proximal damped variant of Finito/MISO called Prox-DFinito and establish its gradient complexities with random reshuffling, cyclic sampling, and shuffling-once, under both convex and strongly convex scenarios. All these rates match with gradient descent, and are state-of-the-art (up to logarithm factors) compared to existing results for without-replacement sampling with variance-reduction, see Table 1. • Our novel analysis can gauge how a cyclic order will influence the rate of Prox-DFinito with cyclic sampling. This allows us to identify the optimal cyclic sampling ordering. ProxDFinito with optimal cyclic sampling, in the highly data-heterogeneous scenario, can attain a sample-size-independent convergence rate, which is the first result, to our knowledge, that can match with uniform-iid-sampling with variance reduction in certain scenarios. We also propose a numerical method to discover the optimal cyclic ordering cheaply. 1.2 Other related works Our analysis on cyclic sampling is novel. Most existing analyses unify random reshuffling and cyclic sampling into the same framework; see the SGD analysis in [11], the variance-reduction analysis in [10, 36, 23, 37], and the coordinate-update analysis in [5]. These analyses are primarily based on the “sampled-once-per-epoch” property and do not analyze the orders within each epoch, so they do not distinguish cyclic sampling from random reshuffling in analysis. [16] finds that random reshuffling SGD is basically the average over all cyclic sampling trials. This implies cyclic sampling can outperform random reshuffling with a well-designed sampling order. However, [16] does not discuss how much better cyclic sampling can outperform random reshuffling and how to achieve such cyclic order. Different from existing literatures, our analysis introduces an order-specific norm to gauge how cyclic sampling performs with different fixed orders. 
With this norm, we are able to characterize the worst-case and best-case performance of variance reduction with cyclic sampling. Simultaneously and independently, a recent work [18] also provides improved rates for variance reduction under without-replacement sampling orders that match gradient descent. However, [18] does not discuss whether and when variance reduction with without-replacement sampling can match uniform sampling. In addition, [18] studies SVRG while this paper studies Finito/MISO; the convergence analyses in these two works are very different. A detailed comparison between this work and [18] is given in Sec. 3.3.

1.3 Notations
Throughout the paper we let col{x_1, ..., x_n} denote a column vector formed by stacking x_1, ..., x_n. We let [n] := {1, ..., n} and define the proximal operator as

$\mathrm{prox}_{\alpha r}(x) := \arg\min_{y \in \mathbb{R}^d}\Big\{\alpha r(y) + \frac{1}{2}\|y - x\|^2\Big\},$ (2)

which is single-valued when r is convex, closed, and proper. In general, we say A is an operator and write A : X → Y if A maps each point in the space X to another space Y, so A(x) ∈ Y for all x ∈ X. For simplicity, we write Ax = A(x) and A ∘ Bx = A(B(x)) for operator composition.

Cyclic sampling. We define π := (π(1), π(2), ..., π(n)) as an arbitrary but fixed permutation of the sample indexes. The order π is fixed throughout the entire learning process under cyclic sampling.

Random reshuffling. At the start of each epoch, a random permutation τ := (τ(1), τ(2), ..., τ(n)) is generated to specify the order in which to take samples. Let τ_k denote the permutation of the k-th epoch.

2 Proximal Finito/MISO with Damping
The proximal gradient method to solve problem (1) is

$z_i^t = x^{t-1} - \alpha \nabla f_i(x^{t-1}), \quad \forall i \in [n],$ (3a)
$x^t = \mathrm{prox}_{\alpha r}\Big(\frac{1}{n}\sum_{i=1}^{n} z_i^t\Big).$ (3b)

To avoid the global average that passes over all samples, we propose to update one z_i per iteration:

$z_i^t = x^{t-1} - \alpha \nabla f_i(x^{t-1})$ if $i = i_t$, and $z_i^t = z_i^{t-1}$ if $i \neq i_t$; (4a)
$x^t = \mathrm{prox}_{\alpha r}\Big(\frac{1}{n}\sum_{i=1}^{n} z_i^t\Big).$ (4b)

When invoked with uniform-iid-sampling and r(x) = 0, algorithm (4a)–(4b) reduces to Finito/MISO [7, 17]. When invoked with cyclic sampling and r(x) = 0, it reduces to DIAG [23] and WPG [19]. We let $\bar z^t := \frac{1}{n}\sum_{i=1}^{n} z_i^t$. The update (4a) yields

$\bar z^t = \bar z^{t-1} + (z_{i_t}^t - z_{i_t}^{t-1})/n.$ (5)

This update costs O(d) operations if {z_i^t} are stored with O(nd) memory. Furthermore, to increase robustness and simplify the convergence analysis, we impose a damping step on z_i and z̄ at the end of each epoch. The proximal damped Finito/MISO method is listed in Algorithm 1. Note that the damping step does not incur additional memory requirements. A more practical implementation of Algorithm 1 is given as Algorithm 3 in Appendix A.

Algorithm 1 Prox-DFinito
Input: $\bar z^0 = \frac{1}{n}\sum_{i=1}^{n} z_i^0$, step-size α, and θ ∈ (0, 1);
for epoch k = 0, 1, 2, ... do
    for iteration t = kn + 1, kn + 2, ..., (k + 1)n do
        $x^{t-1} = \mathrm{prox}_{\alpha r}(\bar z^{t-1})$;
        Pick $i_t$ with some rule;
        Update $z_{i_t}^t$ and $\bar z^t$ according to (4a) and (5);
    end for
    $z_i^{(k+1)n} \leftarrow (1 - \theta) z_i^{kn} + \theta z_i^{(k+1)n}$ for every i ∈ [n];   ▹ damping step
    $\bar z^{(k+1)n} \leftarrow (1 - \theta)\bar z^{kn} + \theta \bar z^{(k+1)n}$;   ▹ damping step
end for
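To make the procedure concrete, the following is a minimal NumPy sketch of Algorithm 1. It is an illustrative reimplementation under stated assumptions, not the authors' reference code: grad_f(i, x) returning ∇f_i(x) and prox_r(x, alpha) returning prox_{αr}(x) are hypothetical user-supplied callables.

import numpy as np

def prox_dfinito(grad_f, prox_r, n, d, alpha, theta, epochs, order=None):
    """Run Prox-DFinito; `order` is a fixed cyclic order (array of indices),
    or None for random reshuffling."""
    z = np.zeros((n, d))                # table of vectors z_i, O(nd) memory
    z_bar = z.mean(axis=0)              # z_bar = (1/n) * sum_i z_i
    for k in range(epochs):
        z_epoch, z_bar_epoch = z.copy(), z_bar.copy()   # snapshots z^{kn}, z_bar^{kn}
        perm = order if order is not None else np.random.permutation(n)
        for i in perm:
            x = prox_r(z_bar, alpha)                # x^{t-1} = prox_{alpha r}(z_bar^{t-1})
            z_new = x - alpha * grad_f(i, x)        # update only the i_t-th entry, (4a)
            z_bar += (z_new - z[i]) / n             # O(d) running-average update, (5)
            z[i] = z_new
        z = (1 - theta) * z_epoch + theta * z       # damping step on each z_i
        z_bar = (1 - theta) * z_bar_epoch + theta * z_bar
    return prox_r(z_bar, alpha)

Passing the same `order` array on every call gives cyclic sampling; leaving it as None gives random reshuffling.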
2.1 Fixed-point recursion reformulation
Algorithm (4a)–(4b) can be reformulated as a fixed-point recursion in {z_i}_{i=1}^n, and this reformulation will be utilized throughout the paper. To proceed, we define z = col{z_1, ..., z_n} ∈ R^{nd} and introduce the average operator A : R^{nd} → R^d as Az = (1/n)Σ_{i=1}^{n} z_i. We further define the i-th block coordinate operator T_i : R^{nd} → R^{nd} as

$\mathcal{T}_i \mathbf{z} = \mathrm{col}\{z_1, \dots, (I - \alpha\nabla f_i) \circ \mathrm{prox}_{\alpha r}(\mathcal{A}\mathbf{z}), \dots, z_n\},$

where I denotes the identity mapping. When T_i is applied, the i-th block coordinate of z is updated while the others remain unchanged.

Proposition 1. Prox-DFinito with fixed cyclic sampling order π is equivalent to the fixed-point recursion (see proof in Appendix B.1)

$\mathbf{z}^{(k+1)n} = (1 - \theta)\mathbf{z}^{kn} + \theta \mathcal{T}_\pi \mathbf{z}^{kn},$ (6)

where T_π = T_{π(n)} ∘ ... ∘ T_{π(1)}. Furthermore, the variable x^t can be recovered by

$x^t = \mathrm{prox}_{\alpha r} \circ \mathcal{A}\mathbf{z}^t, \quad t = 0, 1, 2, \dots$ (7)

A similar result holds for the random reshuffling scenario.

Proposition 2. Prox-DFinito with random reshuffling is equivalent to

$\mathbf{z}^{(k+1)n} = (1 - \theta)\mathbf{z}^{kn} + \theta \mathcal{T}_{\tau_k} \mathbf{z}^{kn},$ (8)

where T_{τ_k} = T_{τ_k(n)} ∘ ... ∘ T_{τ_k(1)}. Furthermore, x^t can be recovered by following (7).

2.2 Optimality condition
Assume there exists x* that minimizes F(x) + r(x), i.e., 0 ∈ ∇F(x*) + ∂r(x*). The relation between the minimizer x* and the fixed points z* of recursions (6) and (8) can be characterized as follows.

Proposition 3. x* minimizes F(x) + r(x) if and only if there exists z* such that (proof in Appendix B.2)

$\mathbf{z}^\star = \mathcal{T}_i \mathbf{z}^\star, \quad \forall i \in [n],$ (9)
$x^\star = \mathrm{prox}_{\alpha r} \circ \mathcal{A}\mathbf{z}^\star.$ (10)

Remark 1. If x* minimizes F(x) + r(x), it follows from (9) and (10) that $z_i^\star = (I - \alpha\nabla f_i) \circ \mathrm{prox}_{\alpha r}(\mathcal{A}\mathbf{z}^\star) = x^\star - \alpha\nabla f_i(x^\star)$ for every i ∈ [n].

2.3 An order-specific norm
To gauge the influence of different sampling orders, we now introduce an order-specific norm.

Definition 1. Given z = col{z_1, ..., z_n} ∈ R^{nd} and a fixed cyclic order π, we define

$\|\mathbf{z}\|_\pi^2 = \sum_{i=1}^{n} \frac{i}{n}\|z_{\pi(i)}\|^2 = \frac{1}{n}\|z_{\pi(1)}\|^2 + \frac{2}{n}\|z_{\pi(2)}\|^2 + \dots + \|z_{\pi(n)}\|^2$

as the π-specific norm. For two different cyclic orders π and π′, it generally holds that ‖z‖²_π ≠ ‖z‖²_{π′}. The coefficients in ‖z‖²_π are delicately designed for technical reasons (see Lemma 1 and its proof in the appendix). The order-specific norm facilitates the performance comparison between two orderings.

3 Convergence Analysis
In this section we establish the convergence rates of Prox-DFinito with cyclic sampling and random reshuffling in the convex and strongly convex scenarios, respectively.

3.1 The convex scenario
We first study the convex scenario under the following assumption.

Assumption 1 (Convex). Each function f_i(x) is convex and L-smooth.

It is worth noting that convergence results on cyclic sampling and random reshuffling in the convex scenario are quite limited; exceptions are [22, 33, 5, 18].

Cyclic sampling and shuffling-once. We first introduce the following lemma showing that T_π is non-expansive with respect to ‖·‖_π, which is fundamental to the convergence analysis.

Lemma 1. Under Assumption 1, if the step-size satisfies 0 < α ≤ 2/L and the data is sampled with a fixed cyclic order π, it holds that (see proof in Appendix C.1)

$\|\mathcal{T}_\pi \mathbf{u} - \mathcal{T}_\pi \mathbf{v}\|_\pi^2 \le \|\mathbf{u} - \mathbf{v}\|_\pi^2, \quad \forall \mathbf{u}, \mathbf{v} \in \mathbb{R}^{nd}.$ (11)

Recall from (6) that the sequence z^{kn} is generated through z^{(k+1)n} = S_π z^{kn}. Since S_π = (1 − θ)I + θT_π and T_π is non-expansive, we can prove that the distance ‖z^{(k+1)n} − z^{kn}‖²_π converges to 0 sublinearly.

Lemma 2. Under Assumption 1, if the step-size satisfies 0 < α ≤ 2/L and the data is sampled with a fixed cyclic order π, it holds for any k = 0, 1, ... that (see proof in the appendix)

$\|\mathbf{z}^{(k+1)n} - \mathbf{z}^{kn}\|_\pi^2 \le \frac{\theta}{(k+1)(1-\theta)}\|\mathbf{z}^0 - \mathbf{z}^\star\|_\pi^2,$ (12)

where θ ∈ (0, 1) is the damping parameter.
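Before moving on, here is a small numerical sanity check of the non-expansiveness in Lemma 1 (the property driving Lemma 2). It is an illustrative sketch only, using least squares f_i(x) = ½(a_iᵀx − b_i)² with r = 0 (so prox_{αr} is the identity); all variable names are made up for the example.

import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 5
A_mat, b = rng.normal(size=(n, d)), rng.normal(size=n)
L = max(float(a @ a) for a in A_mat)   # f_i(x) = 0.5*(a_i @ x - b_i)**2 is ||a_i||^2-smooth
alpha = 2.0 / L                        # the largest step-size allowed by Lemma 1
pi = rng.permutation(n)                # a fixed cyclic order

def pi_norm_sq(z):                     # ||z||_pi^2 = sum_i (i/n) * ||z_{pi(i)}||^2
    return sum((i + 1) / n * float(z[pi[i]] @ z[pi[i]]) for i in range(n))

def T_pi(z):                           # T_pi = T_{pi(n)} o ... o T_{pi(1)}, prox = identity here
    z = z.copy()
    for i in pi:                       # T_{pi(1)} is applied first
        x = z.mean(axis=0)             # the average operator A z
        z[i] = x - alpha * (A_mat[i] @ x - b[i]) * A_mat[i]   # (I - alpha * grad f_i)(x)
    return z

u, v = rng.normal(size=(n, d)), rng.normal(size=(n, d))
assert pi_norm_sq(T_pi(u) - T_pi(v)) <= pi_norm_sq(u - v) + 1e-9   # inequality (11)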
With Lemma 2 and the relation between x^t and z^t in (7), we can establish the convergence rate.

Theorem 1. Under Assumption 1, if the step-size satisfies 0 < α ≤ 2/L and the data is sampled with a fixed cyclic order π, it holds that (see proof in Appendix C.3)

$\min_{g \in \partial r(x^{kn})} \|\nabla F(x^{kn}) + g\|^2 \le \frac{C L^2}{(k+1)\theta(1-\theta)},$ (13)

where θ ∈ (0, 1) and $C = \big(\frac{2}{\alpha L}\big)^2 \frac{\log(n)+1}{n}\|\mathbf{z}^0 - \mathbf{z}^\star\|_\pi^2$.

Remark 2. Inspired by reference [16], one can take the expectation over the cyclic order π in (13) to obtain the convergence rate of Prox-DFinito shuffled once before training begins, with $C = \big(\frac{2}{\alpha L}\big)^2 \frac{(n+1)(\log(n)+1)}{2n^2}\|\mathbf{z}^0 - \mathbf{z}^\star\|^2$:

$\mathbb{E}\, \min_{g \in \partial r(x^{kn})} \|\nabla F(x^{kn}) + g\|^2 \le \frac{C L^2}{(k+1)\theta(1-\theta)}.$ (14)

Random reshuffling. We let τ_k denote the sampling order used in the k-th epoch; τ_k is a uniformly distributed random variable over the n! possible permutations. With a similar analysis technique, we can also establish the convergence rate under random reshuffling in expectation.

Theorem 2. Under Assumption 1, if the step-size satisfies 0 < α ≤ 2/L and the data is sampled with random reshuffling, it holds that (see proof in Appendix D.2)

$\mathbb{E}\, \min_{g \in \partial r(x^{kn})} \|\nabla F(x^{kn}) + g\|^2 \le \frac{C L^2}{(k+1)\theta(1-\theta)},$ (15)

where θ ∈ (0, 1) and $C = \big(\frac{5}{3\alpha L}\big)^2 \frac{1}{n}\|\mathbf{z}^0 - \mathbf{z}^\star\|^2$.

Comparing (15) with (13), random reshuffling replaces the constant ‖z⁰ − z*‖²_π by ‖z⁰ − z*‖² and removes the log(n) term in the upper bound.

3.2 The strongly convex scenario
In this subsection, we study the convergence rate of Prox-DFinito under the following assumption.

Assumption 2 (Strongly Convex). Each function f_i(x) is µ-strongly convex and L-smooth.

Theorem 3. Under Assumption 2, if the step-size satisfies 0 < α ≤ 2/(µ + L), it holds that (see proof in Appendix E)

$(\mathbb{E})\;\|x^{kn} - x^\star\|^2 \le \Big(1 - \frac{2\theta\alpha\mu L}{\mu + L}\Big)^k C,$ (16)

where θ ∈ (0, 1), C = ((log(n)+1)/n)‖z⁰ − z*‖²_π with π-order cyclic sampling, and C = (1/n)‖z⁰ − z*‖² with random reshuffling (the expectation is taken in the latter case).

Remark 3. When θ → 1, Prox-DFinito reaches its best performance, so damping is essentially unnecessary in the strongly convex scenario.

3.3 Comparison with the existing results
Recalling ‖z‖²_π = Σ_{i=1}^{n} (i/n)‖z_{π(i)}‖², it holds that

$\frac{1}{n}\|\mathbf{z}\|^2 \le \|\mathbf{z}\|_\pi^2 \le \|\mathbf{z}\|^2, \quad \forall \mathbf{z}, \pi.$ (17)

For a fair comparison with existing works, we consider the worst-case performance of cyclic sampling by relaxing ‖z⁰ − z*‖²_π to its upper bound ‖z⁰ − z*‖². Letting α = O(1/L), θ = 1/2, and assuming (1/n)‖z⁰ − z*‖² = O(1), the convergence rates derived in Theorems 1–3 reduce to

C-Cyclic = Õ(L²/k),  C-RR = O(L²/k),
SC-Cyclic = Õ((1 − 1/κ)^k),  SC-RR = O((1 − 1/κ)^k),

where "C" denotes "convex", "SC" denotes "strongly convex", κ = L/µ, and Õ(·) hides the log(n) factor. Note that all rates are in the epoch-wise sense. These rates can be translated into the gradient complexity (equivalently, the sample complexity) of Prox-DFinito to reach an ε-accurate solution; the comparison with existing works is listed in Table 1.
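To make this translation concrete, here is the short calculation in the worst case above, where C = Õ(1); this is a worked example added for clarity, not taken from the paper. In the convex scenario, (13) gives an ε-accurate solution once

$\frac{CL^2}{(k+1)\theta(1-\theta)} \le \epsilon \;\Longleftrightarrow\; k = \tilde O\!\Big(\frac{L^2}{\epsilon}\Big) \text{ epochs},$

and each without-replacement epoch evaluates n component gradients, giving a gradient complexity of $\tilde O(nL^2/\epsilon)$. In the strongly convex scenario, $(1 - 1/\kappa)^k \le \epsilon$ holds once $k = O(\kappa\log(1/\epsilon))$, i.e., after $O(n\kappa\log(1/\epsilon))$ gradient evaluations.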
Different metrics. Except for [5] and our Prox-DFinito algorithm, whose convergence analyses in the convex and smooth scenario are based on the gradient norm, the results in other references are based on the function value metric (i.e., the objective error F(x^{kn}) − F(x*)). The function value metric implies the gradient norm metric, but not always vice versa. To compare Prox-DFinito with other established results in the same metric, we transform the rates in other references into the gradient norm metric; the comparison is listed in Table 1. When the gradient norm metric is used, we observe that the rates of Prox-DFinito match those of gradient descent and are state-of-the-art compared to the existing results. However, the rate of Prox-DFinito in terms of the function value is not known yet (this unknown rate may end up being worse than those of the other methods). For the non-smooth scenario, our metric $\min_{g \in \partial r(x)} \|\nabla F(x) + g\|^2$ may not be bounded by the functional suboptimality F(x) + r(x) − F(x*) − r(x*), and hence the Prox-DFinito results are not comparable with those in [21, 35, 37, 33, 18]. The results listed in Table 1 are all for the smooth scenario of [21, 35, 37, 33, 18], and we use "Support Prox" to indicate whether the results cover the non-smooth scenario or not.

Assumption scope. Except for references [18, 35] and the Proximal GD algorithm, whose convergence analyses only assume the average of the component functions to be L̄-smooth (and perhaps µ̄-strongly convex), the results in other references rely on the stronger assumption that each summand function is L-smooth (and perhaps µ-strongly convex). Note that L̄ can sometimes be much smaller than L. To compare [18, 35] and Proximal GD with other references under the same assumption, we set L = L̄ in Table 1. It is worth noting, however, that when the individual smoothness constants L_i differ drastically from one another and can be evaluated precisely, results relying on L̄ (e.g., [35] and [18]) can be much better than the results established in this work.

Comparison with GD. It is observed from Table 1 that Prox-DFinito with cyclic sampling or random reshuffling is no worse than Proximal GD. This is the first no-worse-than-GD result, besides the independent and concurrent work [18], that covers both the non-smooth and the convex scenarios for variance-reduction methods under without-replacement sampling orders. The pioneering work DIAG [23] established a similar result only for smooth and strongly convex problems.²

Comparison with RR/CS methods. Prox-DFinito achieves nearly state-of-the-art gradient complexity in both convex and strongly convex scenarios (except for the convex and smooth case, due to the weaker metric adopted) among known without-replacement stochastic approaches to solving the finite-sum optimization problem (1); see Table 1. In addition, it is worth noting that in Table 1 the algorithms of [33, 35, 23] and our Prox-DFinito have an O(nd) memory requirement while the others need only O(d) memory. In other words, Prox-DFinito is memory-costly in spite of its superior theoretical convergence rate and sample complexity.

Comparison with uniform-iid-sampling methods. It is known that uniform-sampling variance reduction can achieve an O(max{n, L/µ} log(1/ε)) sample complexity for strongly convex problems [14, 26, 6] and O(L²/ε) (when using the metric E‖∇F(x)‖²) for convex problems [26]. In other words, these uniform-sampling methods have sample complexities that are independent of the sample size n. Our achieved results (and the other existing results listed in Table 1 and [18]) for random reshuffling or worst-case cyclic sampling cannot match uniform sampling yet. However, this paper establishes that Prox-DFinito with the optimal cyclic order, in the highly data-heterogeneous scenario, can achieve an Õ(L²/ε) sample complexity in the convex scenario, which matches uniform sampling up to a log(n) factor; see the detailed discussion in Sec. 4. To the best of our knowledge, this is the first result, at least in certain scenarios, in which variance reduction under without-replacement sampling orders matches its uniform-sampling counterpart in terms of the sample complexity upper bound.
Nevertheless, it remains unclear how to close the gap in sample complexity between variance reduction under without-replacement sampling and uniform sampling in more general settings (i.e., settings other than the highly data-heterogeneous scenario).

²While DIAG is established to outperform gradient descent in [23], we find its convergence rate is still of the same order as GD; its superiority to GD comes from a constant improvement, not an order improvement.

4 Optimal Cyclic Sampling Order
Sec. 3.3 examined the worst-case gradient complexity of Prox-DFinito with cyclic sampling, which is worse than that of random reshuffling by a factor of log(n) in both the convex and strongly convex scenarios. In this section we examine how Prox-DFinito performs with the optimal cyclic sampling order.

4.1 Optimal cyclic sampling
Given the sample size n, step-size α, epoch index k, and constants L, µ, and θ, it follows from Theorem 1 that the rate of π-order cyclic sampling is determined by the constant

$\|\mathbf{z}^0 - \mathbf{z}^\star\|_\pi^2 = \sum_{i=1}^{n} \frac{i}{n}\|z_{\pi(i)}^0 - z_{\pi(i)}^\star\|^2.$ (18)

We define the corresponding optimal cyclic order as follows.

Definition 2. An optimal cyclic sampling order π* of Prox-DFinito is defined as

$\pi^\star := \arg\min_{\pi}\{\|\mathbf{z}^0 - \mathbf{z}^\star\|_\pi^2\}.$ (19)

Such an optimal cyclic order can be identified as follows (see proof in Appendix F).

Proposition 4. The optimal cyclic order for Prox-DFinito draws the samples in the reverse (i.e., decreasing) order of {‖z_i⁰ − z_i*‖²}_{i=1}^{n}.

Remark 4 (IMPORTANCE INDICATOR). Proposition 4 implies that ‖z_i⁰ − z_i*‖² can be used as an importance indicator of sample i. Recall z_i* = x* − α∇f_i(x*) from Remark 1. If z_i⁰ is initialized as 0, the importance indicator of sample i reduces to ‖x* − α∇f_i(x*)‖², which is determined by both x* and ∇f_i(x*). If z_i⁰ is initialized close to x*, we then have ‖z_i⁰ − z_i*‖² ≈ α²‖∇f_i(x*)‖². In other words, the importance of sample i can be measured by ‖∇f_i(x*)‖, which is consistent with the importance indicator used in uniform-iid-sampling [41, 40].

4.2 Optimal cyclic sampling can achieve sample-size-independent complexity
Recall from Theorem 1 that the sample complexity of Prox-DFinito with cyclic sampling in the convex scenario is determined by (log(n)/n)‖z⁰ − z*‖²_π. From (17) we have

$\frac{1}{n}\|\mathbf{z}^0 - \mathbf{z}^\star\|^2 \le \|\mathbf{z}^0 - \mathbf{z}^\star\|_\pi^2 \le \|\mathbf{z}^0 - \mathbf{z}^\star\|^2, \quad \forall \mathbf{z}, \pi.$ (20)

In Sec. 3.3 we considered the worst-case performance of cyclic sampling, i.e., we bounded ‖z⁰ − z*‖²_π by its upper bound ‖z⁰ − z*‖². In this section, we examine the best-case performance using the lower bound ‖z⁰ − z*‖²/n, and provide a scenario in which this best-case performance is achievable. We assume ‖z⁰ − z*‖²/n = O(1) as in previous sections.

Proposition 5. Given fixed constants n, α, k, θ, L, and the optimal cyclic order π*, if the condition

$\rho := \frac{\|\mathbf{z}^0 - \mathbf{z}^\star\|_{\pi^\star}^2}{\|\mathbf{z}^0 - \mathbf{z}^\star\|^2} = O\Big(\frac{1}{n}\Big)$ (21)

holds, then Prox-DFinito with optimal cyclic sampling achieves sample complexity Õ(L²/ε).

The proposition follows by directly substituting (21) into Theorem 1. Below, we discuss a data-heterogeneous scenario in which relation (21) holds.
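Before turning to that scenario, here is a minimal sketch of how the ordering of Proposition 4 and the ratio ρ in (21) could be computed. It assumes oracle access to z* purely for illustration (in practice z* is unknown; see the adaptive strategy in Sec. 4.3):

import numpy as np

def optimal_cyclic_order(z0, z_star):
    """pi* of Proposition 4: samples sorted by decreasing ||z_i^0 - z_i^*||^2."""
    scores = np.sum((z0 - z_star) ** 2, axis=1)   # importance indicator of each sample
    return np.argsort(-scores)

def rho(z0, z_star):
    """Ratio in (21): ||z0 - z*||_{pi*}^2 / ||z0 - z*||^2."""
    n = z0.shape[0]
    scores = np.sum((z0 - z_star) ** 2, axis=1)
    pi = np.argsort(-scores)                                   # pi* as above
    pi_norm = sum((i + 1) / n * scores[pi[i]] for i in range(n))   # pi*-specific norm
    return pi_norm / scores.sum()

For the geometric profile used in the scenario below (scores proportional to β^{i−1}), this ratio scales like 1/((1−β)n), i.e., O(1/n).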
A data-heterogeneous scenario. To this end, let x* = col{x*, ..., x*} and ∇f(x*) = col{∇f_1(x*), ..., ∇f_n(x*)}; it follows from Remark 1 that z* = x* − α∇f(x*). If we set z⁰ = 0 (which is common in implementations) and α = 1/L (the theoretically suggested step-size), then ‖z_i⁰ − z_i*‖² = ‖x* − ∇f_i(x*)/L‖². Next, we assume that ‖z_i⁰ − z_i*‖² = ‖x* − ∇f_i(x*)/L‖² = nβ^{i−1} holds with 0 < β < 1. Under this assumption, the optimal cyclic order is π* = (1, 2, ..., n). We now examine ‖z⁰ − z*‖²_{π*} and ‖z⁰ − z*‖²:

$\sum_{i=1}^{n}\|z_i^0 - z_i^\star\|^2 = n\sum_{i=1}^{n}\beta^{i-1} \approx \frac{n}{1-\beta}, \qquad \sum_{i=1}^{n}\frac{i}{n}\|z_i^0 - z_i^\star\|^2 = \sum_{i=1}^{n} i\beta^{i-1} \approx \frac{1}{(1-\beta)^2}$

when n is large, which implies that ρ = ‖z⁰ − z*‖²_{π*}/‖z⁰ − z*‖² = O(1/n) since β is a constant independent of n. By Proposition 5, Prox-DFinito with optimal cyclic sampling then achieves Õ(L²/ε), which is independent of the sample size n. Note that ‖∇f_i(x*)‖² = nβ^{i−1} describes a data-heterogeneous scenario in which β roughly gauges the variety of the data samples.

4.3 Adaptive importance reshuffling
The optimal cyclic order given by Proposition 4 is not practical since the importance indicator of each sample depends on the unknown z_i* = x* − α∇f_i(x*). This problem can be overcome by replacing z_i* with its estimate z_i^{kn}, which leads to an adaptive importance reshuffling strategy. We introduce w ∈ R^n as an importance-indicating vector, with each element w(i) indicating the importance of sample i and initialized as w⁰(i) = ‖z_i⁰ − z̄⁰‖² for all i ∈ [n]. In the k-th epoch, sample i is drawn earlier if w^k(i) is larger. After the k-th epoch, w is updated as

$w^{k+1}(i) = (1 - \gamma)\, w^k(i) + \gamma \|z_i^0 - z_i^{(k+1)n}\|^2,$ (22)

where i ∈ [n] and γ ∈ (0, 1) is a fixed damping parameter. If z_i^{kn} → z_i*, this recursion guarantees w^k(i) → ‖z_i⁰ − z_i*‖². In other words, the order decided by w^k gradually adapts to the optimal cyclic order as k increases. Since the order decided by importance changes from epoch to epoch, we call this approach adaptive importance reshuffling; it is listed as Algorithm 2, and a code sketch follows below. Convergence guarantees for the adaptive importance reshuffling method are provided in Appendix G.

Algorithm 2 Adaptive Importance Reshuffling
Initialize: w⁰(i) = ‖z_i⁰ − z̄⁰‖² for i ∈ [n];
for epoch k = 0, 1, 2, ... do
    Reshuffle [n] based on the vector w^k;
    Run one Prox-DFinito epoch;
    Update w^{k+1} according to (22);
end for
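The following is a minimal sketch of Algorithm 2, assuming a hypothetical callable run_epoch(z, order) that performs one Prox-DFinito epoch with the given order (e.g., the inner loop of the earlier sketch) and returns the updated table z:

import numpy as np

def adaptive_importance_reshuffling(z0, run_epoch, epochs, gamma):
    z = z0.copy()
    w = np.sum((z0 - z0.mean(axis=0)) ** 2, axis=1)   # w^0(i) = ||z_i^0 - z_bar^0||^2
    for k in range(epochs):
        order = np.argsort(-w)            # draw sample i earlier when w(i) is larger
        z = run_epoch(z, order)           # one Prox-DFinito epoch with this order
        w = (1 - gamma) * w + gamma * np.sum((z0 - z) ** 2, axis=1)   # update (22)
    return z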
5 Numerical Experiments
5.1 Comparison with SVRG and SAGA under without-replacement sampling orders
In this experiment, we compare DFinito with SVRG [14] and SAGA [7] under without-replacement sampling (random reshuffling and cyclic sampling). We consider a setting similar to [18, Figure 2], where all step-sizes are chosen as the theoretically optimal ones; see Table 2 in Appendix H. We run experiments on the regularized logistic regression problem, i.e., problem (1) with

$f_i(x) = \log\big(1 + \exp(-y_i \langle w_i, x\rangle)\big) + \frac{\lambda}{2}\|x\|^2,$

on three widely-used datasets: CIFAR-10 [15], MNIST [8], and COVTYPE [29]. This problem is L-smooth and µ-strongly convex with $L = \frac{1}{4n}\lambda_{\max}(W^{\mathsf T}W) + \lambda$ and µ = λ. From Figure 1, it is observed that DFinito outperforms SVRG and SAGA in terms of gradient complexity under without-replacement sampling orders with their best-known theoretical rates. The comparison with SVRG and SAGA under practically optimal step-sizes is in Appendix J.

5.2 DFinito with cyclic sampling
Justification of the optimal cyclic sampling order. To justify the optimal cyclic sampling order π* suggested in Proposition 4, we test DFinito with eight arbitrarily selected cyclic orders and compare them with the optimal cyclic ordering π* as well as the adaptive importance reshuffling method (Algorithm 2). To make the comparison distinguishable, we construct a least-squares problem with heterogeneous data samples with n = 200, d = 50, L = 100, µ = 10⁻² (see Appendix I for the constructed problem). The constructed problem has ρ = ‖z⁰ − z*‖²_{π*}/‖z⁰ − z*‖² = 0.006 when z_i⁰ = 0, x⁰ = 0, and α = 1/(3L), which is close to 1/n = 0.005. In the left plot of Figure 2, it is observed that optimal cyclic sampling achieves the fastest convergence. Furthermore, the adaptive reshuffling method matches the optimal cyclic ordering. These observations are consistent with the theoretical results derived in Sec. 4.2 and 4.3.

Optimal cyclic sampling can achieve sample-size-independent complexity. It is established in [26] that Finito with uniform-iid-sampling achieves n-independent gradient complexity with α = n/(8L). In this experiment, we compare DFinito (α = 2/L) with Finito under uniform sampling (8 runs, α = n/(8L)) in a convex and highly heterogeneous scenario (ρ = O(1/n)). The constructed problem has n = 500, d = 20, L = 0.3, θ = 0.5, and ‖z_i⁰ − z_i*‖ = 10000 × 0.1^{i−1} for 1 ≤ i ≤ n (see the detailed initialization in Appendix J). We also depict DFinito with random reshuffling (8 runs) as another baseline. In the right plot of Figure 2, the convergence curve of DFinito with π*-cyclic sampling matches that of Finito with uniform sampling. This implies DFinito can achieve the same n-independent gradient complexity as Finito with uniform sampling.

5.3 More experiments
We conduct more experiments in Appendix J. First, we compare DFinito with GD/SGD to justify its empirical superiority over these methods. Second, we validate how different levels of data heterogeneity influence optimal cyclic sampling. Third, we examine the performance of SVRG, SAGA, and DFinito under without/with-replacement sampling using grid-searched (rather than theoretical) step-sizes.

6 Conclusion and Discussion
This paper develops Prox-DFinito and analyzes its convergence rate under without-replacement sampling in both convex and strongly convex scenarios. The derived rates are state-of-the-art compared to existing results. In particular, this paper derives the best-case convergence rate of Prox-DFinito with cyclic sampling, which can be sample-size-independent in the highly data-heterogeneous scenario. A future direction is to close the gap in gradient complexity between variance reduction under without-replacement and uniform-iid-sampling in more general settings.
1. What is the focus of the paper regarding composite convex minimization problems? 2. What are the strengths of the proposed algorithm, particularly in its convergence rate and practicality? 3. Do you have any concerns or questions about the algorithm, such as the role of the damped step or the nonexpansiveness of T? 4. How does the paper compare the proposed method with other schemes, including SVRG and SAGA? 5. What is the significance of the weighted norm used in the analysis, and how does it contribute to the new method's uniqueness?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a new first-order algorithm using a without-replacement strategy to solve a class of composite convex minimization problems. The main idea is to modify the well-known Finito/MISO scheme by applying a without-replacement strategy and a damping step. Under the convexity and L-smoothness of f, the proposed algorithm achieves an O(1/k) rate in epochs on the optimality residual. When f is additionally strongly convex, the rate improves to linear, as in standard proximal gradient methods. The analysis relies on a weighted norm defined through the order of the underlying shuffling strategy. The authors also compare their method with other schemes, such as a coordinate descent method with a cyclic rule and standard GD. Next, the authors investigate the optimal cyclic sampling rule and propose an adaptive variant. Numerical examples on a standard logistic regression problem are presented to illustrate the performance of the proposed methods.

Review
Originality: I believe that the result of this paper is new, especially the use of a new weighted norm in this case. In fact, the algorithm can be cast as a variant of the ARock method [Peng et al, 2015], but the use of without-replacement sampling is new. The use of a damping step is not new; it has been widely used in fixed-point methods as well as in optimization. The key idea of the analysis is to show that the corresponding fixed-point mapping is nonexpansive, which allows the use of a damped step and achieves an O(1/k) rate in epochs. However, the convergence rate is not really encouraging even though it is comparable with GD. The reason is that this method requires storing n auxiliary vectors, as in SAGA, making it less practical when n and d are large.

Quality/clarity: The paper is well written and well motivated in general. It has both algorithmic and theoretical contributions. The technical development is sound and the analysis seems nontrivial.

Significance: Overall, the paper makes a nice contribution in terms of a new algorithm, especially its use of a without-replacement strategy. Below are some concrete comments and questions:

-- Is the damped step in Algorithm 1 really the key to achieving the O(1/k) rate? In fact, it is not needed in the strongly convex case. I have a feeling that it is only needed when the fixed-point mapping is merely non-expansive. Could the authors clarify and discuss this point, since it is highlighted in the paper?

-- It seems that the nonexpansiveness of T is rather trivial, since it is built from proximal gradient operators, which are nonexpansive for such a choice of step-size. Are any other properties needed to guarantee the non-expansiveness of T in (11)?

-- The comparison with SVRG and SAGA using the random reshuffling rule seems not to make sense. These variants may not work with without-replacement rules unless an appropriate variant (e.g., in [18, 37]) is used. The authors may use [18] for SVRG, but for SAGA, which variant is used in the experiments?

-- The weighted norm only depends on z0 and z*, and does not depend on the landscape of the problem. This is a very interesting aspect of the new method. Can you add more discussion on this aspect to elaborate on the new contribution?
1. What is the main contribution of the paper regarding stochastic variance reduced methods? 2. What are the strengths and weaknesses of the proposed method, particularly in comparison to other works in the field? 3. How does the paper address the issue of data heterogeneity, and what is the significance of this aspect? 4. Are there any concerns or typos in the paper, especially in the proof of the theorem? 5. How does the proposed method differ from others in terms of memory cost and total complexity?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors study a stochastic variance reduced method under without-replacement sampling rules for smooth and non-smooth, convex and strongly convex objectives. More precisely:
- they develop a proximal method called Prox-DFinito, for which they study convergence rates under random reshuffling, cyclic, and shuffle-once sampling;
- for cyclic sampling, they derive an optimal fixed ordering (and a practical adaptive variant which does not require knowledge of z*).
Finally, the authors highlight the fact that, when assuming data heterogeneity, their analysis leads to a convergence rate that is independent of the number of data samples. Until now, this was only known for iid sampling.
Review First of all, the paper is very well written (especially the background detailing previous works), with very few typos, and the main proofs seem correct.
Minor comments, which are just my opinion: I do not find that "generally convex" is a proper way to describe a function which is just "convex". It is not standard, and reading it quickly might make the reader think it is more general than convexity. Lines 37 and 39: I would not start a sentence with a reference; I would write "Authors of [XX] establish ...". Prop. 1 & 3: I would not put "(proof in Appendix XX)" inside the proposition, but before or after it.
Minor/medium importance comments: Could you specify in Table 1 that you assume the f_i's are strongly convex (not F)? It makes the comparison with [18] easier. Eq. (13): I do not know what the squared norm of a subdifferential is; I think it is not standard. I would rewrite this to make it mathematically correct if you meant "the squared norm of all elements in the subdifferential". Remark 3: typo "One can takING" -> "take"; maybe it is worth citing [16] again in Remark 3. Line 154: why "Apparently"? It makes the sentence look potentially false. Line 232, two typos: "==" and a word is missing before the parenthesis. Line 242, typo: ∇f(x*) is written; it should be ∇f_i(x*) instead.
Major comments: Could you clearly explain and highlight the fact that Table 1 compares numbers of gradient evaluations (total complexity) needed to reach ε-solutions? It makes the comparison with Table 1 in [18] easier. I find that the rates of [18] are missing in Table 1. Even if an entire section in the appendix describes the differences, we need a more detailed comparison in the main text instead of the short three lines at the end of subsection 1.2, giving the pros and cons of each method (and clearly stating the difference in memory: O(nd) (yours) vs O(d) ([18])). Lines 98-100: It is not clearly explained how you came up with this damping parameter. Why do you need it?
Major comments on Appendix K: You do not discuss the "big data regime" + strong convexity, for which the authors of [18] clearly have a gain: in their terms, they get an iteration complexity of O((L/µ) log(1/ε)). This assumption leads to the same total complexity as yours for RR and the worst cyclic order, O(n(L/µ) log(1/ε)), right? Could you state that your memory cost is n times the one for SVRG?
Questions: Did you come up with the idea of this "order-specific norm"? If not, could you please add a reference? I was not aware of the "data heterogeneity assumption" before reading this paper. Where does it come from? Could you add references about this? Is it met in practice? What is the link between the assumption and the heterogeneity of the data?
Proof of Theorem 1, line 465: Why is z_{π(j)}^{kn+j−1} = z_{π(j)}^{kn} for all j = 1, …, n true? For instance, if π(j) = j and k = 0, then it would mean that z_j^{j−1} = z_j^0, which is not necessarily the case if the index sampled at step j − 1 is j. Maybe I'm wrong on this one, but let me know if there is an easy way to see this.
NIPS
Title Meta-learning to Improve Pre-training Abstract Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned. The hyperparameters introduced by these strategies therefore must be tuned appropriately. However, setting the values of these hyperparameters is challenging. Most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%. 1 Introduction A popular and important learning paradigm for neural networks is pre-training (PT) followed by fine-tuning (FT), an approach commonly used in transfer learning [13, 59, 19, 27, 52, 11, 37, 74, 35, 28], and semi-supervised learning [9, 8, 24]. This paradigm has led to performance improvements in many domains, including computer vision [13, 59, 19, 37, 74, 35], natural language processing [27, 52, 11, 40, 34], graph structured prediction [28], and clinical machine learning [45, 46, 2, 48], and is especially helpful in settings where downstream tasks have limited training data. The PT & FT paradigm introduces high-dimensional, complex PT hyperparameters, such as parameterized data augmentation policies used in contrastive representation learning [8, 22] or the use of task, class, or instance weighting variables in multi-task PT to avoid negative transfer [70]. These hyperparameters can significantly affect the quality of pre-trained models [8], and thus finding techniques to set their values optimally is an important area of research. Choosing optimal PT hyperparameter values is challenging, and existing methods do not work well. Simple approaches such as random or grid search are inefficient since evaluating a hyperparameter setting requires performing the full, two-stage PT & FT optimization, which may be prohibitively computationally expensive. Gradient-free approaches, such as Bayesian optimization or evolutionary algorithms [33, 61, 47], are also limited in how well they scale to this setting. Gradient-based approaches [44, 41, 43, 42] can be used online to jointly learn hyperparameters and model parameters and can scale to millions of hyperparameters [42], but typically deal with a standard single-stage learning problem (e.g., normal supervised learning) and are therefore not directly applicable to the two-stage PT & FT learning problem. In this work, we address this gap and propose a method for high-dimensional PT hyperparameter optimization.
We first formalize a variant of the PT & FT paradigm, which we call meta-parameterized pre-training (Figure 1), where meta-parameters refer to arbitrary PT hyperparameters or parameterizable architectural choices that can be optimized to improve the learned representations.¹ We outline a meta-learning problem characterizing the optimal meta-parameters and propose a gradient-based method to learn meta-parameters. Our contributions are:
• We formalize meta-parameterized pre-training, a variant of the pre-training and fine-tuning (PT & FT) paradigm where PT is augmented to incorporate meta-parameters: arbitrary structures that can be optimized to improve learned representations.
• We propose a scalable gradient-based algorithm to learn meta-parameters using a novel method to obtain meta-parameter gradients through the two-stage PT & FT process. Our gradient estimator composes a constant-memory implicit differentiation approximation for the longer PT stage and exact backpropagation through training for the shorter FT stage.
• We show that our algorithm recovers optimal meta-parameters in toy experiments on synthetic data.
• In two real-world experimental domains, we demonstrate our algorithm improves performance. Firstly, on a multitask PT benchmark over biological graph-structured data [28], using our method to optimize meta-parameters representing task weights improves performance by up to 3.9% AUROC. Secondly, for semi-supervised learning using SimCLR [8] over electrocardiography data, using our algorithm to optimize meta-parameters representing the weights of a data augmentation neural network improves performance by up to 1.9% AUROC.
2 Problem Setup and Preliminaries
In this section, we define the meta-parameterized pre-training meta-learning problem, and compare it to traditional fine-tuning and pre-training. A full glossary of notation is in Appendix B, Table 3.
Notation. Let the subscript • be a placeholder for either PT (pre-training) or FT (fine-tuning), X ⊆ R^d be our input domain, Y• and Ŷ• be the true and predicted output spaces for some model respectively, and Θ, Ψ•, Φ be spaces of parameters for models. We will use f• : X; (Θ, Ψ•) → Ŷ• to refer to a parametric model, with the semicolon separating the input space from the parameter spaces. We then define f• = f•^(head) ∘ f^(feat), such that f^(feat)(·; θ ∈ Θ) is a feature extractor that is transferable across learning stages (e.g., pre-training to fine-tuning), and f•^(head)(·; ψ ∈ Ψ•) is a stage-specific head that is not transferable. Given a data distribution x•, y• ∼ D•, parametric model f•, and loss function L• : Ŷ• × Y• → R, we will also define for convenience a corresponding expected loss L• : Θ, Ψ• → R via L•(θ, ψ•; D•) = E_{D•}[L•(f•(x•; θ, ψ•), y•)]. We also adopt the convention that the output of the argmin operator is any arbitrary minimum, rather than the set of possible minima, to avoid complications in notation.
2.1 Problem Formulation
Supervised Learning (Fig. 1A). In a fully-supervised setting (our fine-tuning domain), we are given a data distribution D_FT, model f, and loss L_FT. Using a learning algorithm Alg_FT (e.g., SGD) that takes as input initial parameters θ_FT^(0), ψ_FT^(0), our goal is to approximate the L_FT-optimal parameters:

  θ*_FT, ψ*_FT = Alg_FT(θ_FT^(0), ψ_FT^(0); D_FT) ≈ argmin_{θ∈Θ, ψ∈Ψ_FT} L_FT(θ, ψ; D_FT)
Pre-training (Fig. 1B). For tasks where data is scarce, we can additionally incorporate a pre-training step and approximate the optimal initial parameters for FT (i.e., the final pre-trained weights are used as initialization weights for the FT stage), again via an optimization algorithm Alg_PT:

  θ*_PT = Alg_PT(θ_PT^(0), ψ_PT^(0); D_PT) ≈ argmin_{θ∈Θ} L_FT(Alg_FT(θ, ψ_FT^(0); D_FT); D_FT).²

¹We use the term meta-parameter since these structures do not directly affect inference of the final model after FT, but instead inform the process of learning this model (by modulating the PT process). ²Note that we discard the PT head ψ*_PT here as only the PT feature extractor θ*_PT is transferred.
Figure (1) Meta-Parameterized Pre-Training. A paradigm where meta-parameters — rich, potentially high dimensional structures that generalize PT hyperparameters — are incorporated in PT to improve the learned representations. Meta-parameters are optimized in a meta-PT phase, using data from FT task(s) in a meta-FT dataset. The FT and meta-FT datasets are (potentially overlapping) samples from the FT data distribution.
Meta-Parameterized PT (Fig. 1C). In Meta-Parameterized PT, we recognize that, in addition to taking as input the PT parameters θ, Alg_PT is itself parameterized by a set of meta-parameters φ ∈ Φ: arbitrary, potentially high dimensional quantities that inform the structure of the algorithm directly. These could represent weighting strategies, data augmentation policies, or sampling processes. The optimal meta-parameters φ^(opt) are the solution to the following meta-PT optimization problem:

  φ^(opt) = argmin_{φ∈Φ} L_FT(Alg_FT(Alg_PT(θ_PT^(0), ψ_PT^(0); D_PT, φ), ψ_FT^(0); D_FT); D_FT).

2.2 Example: Multitask Meta-Parameterized Pre-Training
To make our notation concrete, here we instantiate our setup for a multitask pre-training problem (a small code sketch follows below).
Problem: Suppose we have a multitask classification dataset, (X × Y)^N, such that Y = Y_1 × · · · × Y_K consists of labels for K distinct tasks. Of this full set of tasks, we are interested only in a subset of M tasks, S = {t_1, . . . , t_M} ⊆ {1, . . . , K}.
Supervised FT: Under supervised FT alone, we can directly average a cross-entropy loss L_CE over only the tasks in S, L_FT(ŷ, y) = (1/M) Σ_{j=1}^M L_CE(ŷ^(t_j), y^(t_j)), and then solve this problem via SGD.
PT: If we assume that S is a random subset of the full set of tasks, we can introduce a PT stage over all tasks: L_PT(ŷ, y) = (1/K) Σ_{i=1}^K L_CE(ŷ^(i), y^(i)), followed by FT on S alone. As S is a random subset, leveraging all tasks for PT is well motivated and may improve performance.
Meta-Parameterized PT: In the case where S is not a random subset, the PT strategy described above is no longer well-motivated. However, using meta-parameterized PT, we can still effectively pre-train by introducing meta-parameters that weight the tasks, φ = [φ_1 . . . φ_K], and modulate the loss function L_PT: L_PT(ŷ, y; φ) = Σ_{i=1}^K φ_i L_CE(ŷ^(i), y^(i)). With optimal meta-parameters φ^(opt), the PT stage will leverage only the subset of tasks that best informs the final FT performance. This setting mirrors our real-world experiment in Section 5.
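To make the weighted loss of Section 2.2 concrete, here is a minimal PyTorch-style sketch of L_PT(ŷ, y; φ) for the special case of K binary tasks; the tensor shapes and the binary-task restriction are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def meta_weighted_pt_loss(logits, targets, phi):
    """Task-weighted PT loss L_PT(yhat, y; phi) = sum_i phi_i * L_CE(yhat_i, y_i),
    sketched for K binary tasks.
    logits, targets: (batch, K) tensors (targets are 0/1 floats); phi: (K,)."""
    per_task = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none"
    ).mean(dim=0)                  # one cross-entropy value per task
    return (phi * per_task).sum()  # meta-parameters phi weight each task
```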
3 Methods: Optimizing Meta-Parameters for Two-Stage Training
We now introduce our gradient-based algorithm to optimize meta-parameters. We first describe how to efficiently approximate meta-parameter gradients through the two-stage PT and FT optimization. We then present our algorithm, and outline practical considerations when using it.
3.1 Efficient Computation of Meta-Parameter Gradients
We begin by defining:

  g(φ; θ_PT^(0), ψ_PT^(0), ψ_FT^(0)) = L_FT(Alg_FT(Alg_PT(θ_PT^(0), ψ_PT^(0); D_PT, φ), ψ_FT^(0); D_FT); D_FT),   (1)

where the inner term Alg_PT(θ_PT^(0), ψ_PT^(0); D_PT, φ) produces the parameter θ_PT and Alg_FT produces the parameters θ_FT, ψ_FT, so that φ^(opt) = argmin_{φ∈Φ} g(φ). We also define two best-response values:

  θ*_PT(φ) = Alg_PT(θ_PT^(0), ψ_PT^(0); D_PT, φ),
  θ*_FT(φ), ψ*_FT(φ) = Alg_FT(θ*_PT(φ), ψ_FT^(0); D_FT).

We do not explicitly include the dependence of the best responses on the initialization values for notational convenience. With these defined, we now consider the desired gradient term, ∂g/∂φ. Under our definitions, the direct partial derivatives ∂L_FT/∂φ and ∂Alg_FT/∂φ are zero, so ∂g/∂φ reduces to a simple expression of the chain rule:

  ∂g/∂φ |_{φ′} = [∂L_FT/∂[θ_FT, ψ_FT] |_{θ*_FT(φ′), ψ*_FT(φ′)}] × [∂Alg_FT/∂θ_PT |_{θ*_PT(φ′)}] × [∂Alg_PT/∂φ |_{φ′}],   (2)

i.e., the product of the FT loss gradient, the FT best-response Jacobian, and the PT best-response Jacobian. The FT Loss Gradient term on the RHS of (2) is easily computed using backpropagation. Computing the other two terms is more involved, and we detail each below, beginning with the PT best-response Jacobian. The full algorithm with both gradient estimation terms is provided in Algorithm 1.
PT Best-Response Jacobian ∂Alg_PT/∂φ. Using recent work in hyperparameter optimization with implicit differentiation [42], we re-express this term using the implicit function theorem (IFT). If we assume that θ*_PT(φ) = Alg_PT(θ_PT^(0); D_PT, φ) is a good approximation of argmin_{θ∈Θ} L_PT(θ; D_PT, φ) (i.e., the PT model converges to L_PT-optimal parameters), then under certain smoothness and regularity assumptions on the PT parameters and meta-parameters, the IFT allows us to re-express ∂Alg_PT/∂φ as:

  ∂Alg_PT/∂φ |_{φ′} = − [∂²L_PT/∂θ_PT ∂θ_PTᵀ]⁻¹ × ∂²L_PT/∂θ_PT ∂φᵀ |_{θ*_PT(φ′), φ′},   (3)

which is the product of the inverse Hessian and a matrix of mixed partial derivatives. Following [42], the inverse can be efficiently approximated using a truncated Neumann series (a code sketch of this approximation follows below).
FT Best-Response Jacobian ∂Alg_FT/∂θ_PT. First, note that without additional constraints on Alg_FT, the FT best-response Jacobian may be zero. This is because L_FT has no functional dependence on the variable θ_PT and, if we assume the convergence point θ*_FT is stable (as we did for the PT best-response Jacobian), this implies that the gradient of θ*_FT with respect to θ_PT would be zero. To enable effective learning, we must therefore either (1) impose restrictions on Alg_FT to ensure there is a dependence between the initialization point and the final loss value (e.g., proximal regularization [55]) or (2) leverage methods that do not differentiate through Alg_FT through convergence, as at non-converged points we will still observe nonzero L_FT-gradients [29, 51]. Given that the FT phase often involves shorter optimization horizons than PT, we take approach 2 here, and iteratively update θ_FT for K steps. We first initialize the FT head ψ_FT^(0) and then compute:

  θ_FT^(0) = copy(θ*_PT)   (init with PT solution, implicitly performing a stop-gradient)
  [θ_FT^(k), ψ_FT^(k)] = [θ_FT^(k−1), ψ_FT^(k−1)] − η_FT ∂L_FT/∂[θ_FT, ψ_FT] |_{θ_FT^(k−1), ψ_FT^(k−1)},   k = 1, . . . , K
  θ*_FT, ψ*_FT ≈ θ_FT^(K), ψ_FT^(K),   (4)

and compute the gradient ∂Alg_FT/∂θ_PT |_{θ*_PT(φ′)} by differentiating through this optimization.³ We can also choose to freeze the feature extractor parameters θ_FT and update only the head parameters ψ_FT during truncated FT, and use this to obtain meta-parameter gradients. This resembles linear evaluation, where a linear classifier is trained on top of fixed, pre-trained feature extractors [50, 3, 63]. Together, these two approximations allow for efficient computation of meta-parameter gradients.
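To illustrate how the inverse Hessian in Eq. (3) can be approximated, below is a minimal PyTorch sketch of a truncated Neumann-series inverse-Hessian-vector product in the spirit of [42]; the function name, the learning-rate scaling, and the default truncation length are illustrative assumptions.

```python
import torch

def neumann_inverse_hvp(vec, params, loss, lr=0.1, truncation=10):
    """Approximate H^{-1} @ vec, where H is the Hessian of `loss` w.r.t.
    `params`, via the truncated Neumann series
    H^{-1} ~= lr * sum_{j=0}^{truncation} (I - lr * H)^j."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [x.detach().clone() for x in vec]
    acc = [x.clone() for x in v]
    for _ in range(truncation):
        # Hessian-vector product via double backprop
        hvp = torch.autograd.grad(grads, params, grad_outputs=v, retain_graph=True)
        v = [x - lr * h for x, h in zip(v, hvp)]
        acc = [a + x for a, x in zip(acc, v)]
    return [lr * a for a in acc]
```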
³While Equation 4 uses standard gradient descent, we could use other differentiable optimizers (e.g., Adam).
Algorithm 1 Gradient-based algorithm to learn meta-parameters. Notation defined in Appendix B, Table 3. Vector-Jacobian products (VJPs) can be efficiently computed by standard autodifferentiation.
1: Initialize PT parameters θ_PT^(init), ψ_PT^(init), ψ_FT^(0) and meta-parameters φ^(0)
2: for n = 1, . . . , N iterations do
3:   Initialize θ_PT^(0) = θ_PT^(init) and ψ_PT^(0) = ψ_PT^(init).
4:   for p = 1, . . . , P PT iterations do
5:     [θ_PT^(p), ψ_PT^(p)] = [θ_PT^(p−1), ψ_PT^(p−1)] − η_PT ∂L_PT/∂[θ_PT, ψ_PT] |_{θ_PT^(p−1), ψ_PT^(p−1)}
6:   end for
7:   Initialize FT encoder with PT solution: θ_FT^(0) = copy(θ_PT^(P)).
8:   Approximate θ*_FT, ψ*_FT using Eq. 4.
9:   Compute g₁ = ∂L_FT/∂[θ_FT, ψ_FT] |_{θ*_FT, ψ*_FT}
10:  Compute VJP g₂ = g₁ ∂Alg_FT/∂θ_PT |_{θ_PT^(P), ψ_FT^(0)} using the unrolled learning step from line 8.
11:  Approximate VJP ∂g/∂φ |_{φ^(n−1)} = g₂ ∂Alg_PT/∂φ |_{φ^(n−1)} using the IFT (Eq. 3).
12:  φ^(n) = φ^(n−1) − η_V ∂g/∂φ |_{φ^(n−1)}
13:  Update PT initialization by setting: θ_PT^(init) = θ_PT^(P) and ψ_PT^(init) = ψ_PT^(P).
14: end for
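As a rough companion to lines 7–10 of Algorithm 1, the following PyTorch sketch unrolls K truncated FT steps (Eq. 4) and backpropagates through the unroll to obtain the gradient of the final FT loss with respect to the PT solution; the single-tensor parameterization, the shared loss function for the unroll and the final evaluation, and the argument names are simplifying assumptions.

```python
import torch

def ft_unroll_grad(theta_pt, psi0, ft_loss_fn, lr=1e-2, K=1):
    """Unroll K FT steps from the PT solution (Eq. 4), then differentiate
    the final FT loss through the unroll. Returns the gradient w.r.t.
    theta_pt, i.e. the FT loss gradient composed with the FT best-response
    Jacobian (g1 and g2 in Algorithm 1)."""
    theta = theta_pt.detach().clone().requires_grad_(True)  # theta_FT^(0) = copy(theta_PT)
    th = theta
    ps = psi0.detach().clone().requires_grad_(True)
    for _ in range(K):  # truncated FT steps of Eq. (4)
        loss = ft_loss_fn(th, ps)
        g_th, g_ps = torch.autograd.grad(loss, (th, ps), create_graph=True)
        th, ps = th - lr * g_th, ps - lr * g_ps
    final_loss = ft_loss_fn(th, ps)  # L_FT at the approximate best response
    return torch.autograd.grad(final_loss, theta)[0]
```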
3.2 Our Algorithm and Practical Considerations
By leveraging the above approximations, we obtain Algorithm 1 to optimize meta-parameters φ online during PT & FT of the base model. Note that Alg_PT is explicitly written out as a sequence of gradient updates (lines 4-6 in Algorithm 1). We now discuss practical considerations when using this algorithm, with further details given in Appendix C.
(1) Access to D_FT and generalizing to new FT tasks: Solving the meta-PT problem requires availability of: the model f•, the PT data D_PT, and the FT data D_FT. In this work, we assume availability of the model and PT dataset, but since assuming access to the complete FT dataset at meta-PT time is more restrictive, we study two scenarios: Full FT Access, where all FT data that we expect to encounter is available at meta-PT time, and Partial FT Access, where the FT data available at meta-PT time is only a sample from a distribution of FT data that we may encounter later. Full FT Access occurs in settings like semi-supervised learning, where we are given a large unlabelled PT dataset and a small labelled FT dataset and our goal is to achieve the best possible performance by leveraging these two fixed datasets [68, 73, 25, 24, 8, 9]. Partial FT Access occurs when our goal is to learn transferable representations: at meta-PT time, we might have limited knowledge of FT tasks or data. In evaluating this scenario, we examine generalizability to new FT tasks, given only small amounts of FT data/task availability at meta-PT time, demonstrating that even very limited FT access can be sufficient for effective meta-parameter optimization [11, 45, 56, 28].
(2) D_FT splits: In practice, we have access to finite datasets and use minibatches, rather than true data-generating processes. Following standard convention, we split D_FT into two subsets for meta-learning: D_FT^(tr) and D_FT^(val) (independent of any held-out D_FT testing split), and define the FT data available at meta-PT time as D_FT^(Meta) = D_FT^(tr) ∪ D_FT^(val). We use D_FT^(tr) for the computation of ∂Alg_FT/∂θ_PT |_{θ_PT^(P), ψ_FT^(0)} and ∂Alg_PT/∂φ |_{φ^(n−1)}, and D_FT^(val) for the computation of ∂L_FT/∂[θ_FT, ψ_FT] |_{θ*_FT, ψ*_FT} in Algorithm 1.
(3) Online updates: Given that PT phases often involve long optimization horizons, for computational efficiency, we update θ_PT and ψ_PT online rather than re-initializing them at every meta-iteration (see Algorithm 1). FT phases are often shorter, so we could in theory re-initialize ψ_FT at each meta-iteration, as is presented in Algorithm 1. However, it is more computationally efficient to also optimize this online, and we follow this approach in our experiments. A description of the algorithm with these details is in Appendix C. Note that prior work [67] has suggested that online optimization of certain hyperparameters (e.g., learning rates) using short horizons may yield suboptimal solutions. We comment on this in Appendix C, study this effect for our algorithm in synthetic experiments in Appendix E, and in real-world experiments on self-supervised learning in Appendix G, revealing it is not a significant concern.
(4) Computational tractability: Our method can scale to large encoder models and high-dimensional meta-parameters, despite the complexity of the two-stage PT & FT process. This is because: (i) meta-parameters are optimized jointly with the base model parameters; (ii) using the IFT to obtain gradients has similar time and memory complexity to one iteration of training [42]; (iii) the FT best-response Jacobian can be approximated efficiently using a small number of unrolled optimization steps K, and by only unrolling the FT head of the network. In our real-world experiments (Sections 5 and 6), meta-parameterized PT has less than twice the time cost of standard PT. Further details on time and memory cost are provided in Appendices F and G.
(5) Setting optimizer parameters: Learning rates and momentum values can impact the efficacy of the algorithm. A discussion on how to set them in practice is provided in Appendix D.
4 Synthetic Experiments
We validate that our algorithm recovers optimal low- and high-dimensional meta-parameters in two synthetic MNIST experiments with Full FT Access. Further details and results are provided in Appendix E, including a study showing that our method performs comparably to differentiating exactly through the entire learning process of PT & FT, without approximations.
First, we optimize low-dimensional meta-parameters characterizing a data augmentation scheme. We tune a 1-D meta-parameter φ representing the mean of a Normal distribution N(φ, 1²) from which we sample rotation augmentations to apply to PT images. FT images undergo rotations from a Normal distribution N(µ_FT, 1²) with µ_FT = 90°; we therefore expect that φ should converge to near µ_FT. Using Algorithm 1 to optimize φ, we find that the mean error in the optimized meta-parameter over 10 different initializations is small: 7.2 ± 1.5°, indicating efficacy of the algorithm.
Next, we consider learning high-dimensional meta-parameters that characterize a PT per-example weighting scheme. The PT dataset contains some examples that have noisy labels, and FT examples all have clean labels. The meta-parameters are the parameters of a neural network that assigns importance weights to each PT example, which is used to weight the loss on that example during PT.
We use Algorithm 1 again to optimize φ, over 10 random initializations, finding that the ratio of assigned importance weights between clean-label PT examples and noisy-label PT examples is greater than 10². This is expected since the noisy-label classes may worsen the quality of the PT model and so should be down-weighted.
5 Meta-Parameterized Multitask Pre-Training for Graph Neural Networks
We consider optimizing PT task weights for a multitask PT & FT problem of predicting the presence of protein functions (multitask binary classification) given graph-structured biological data as input. We have two experimental goals: first, in the Full FT Access setting, where methods are given access to all FT data at PT time, we evaluate whether optimizing task weighting meta-parameters can improve predictive performance on the FT tasks. Second, motivated by how in typical transfer learning problems, new tasks or labels not available at PT time may become available at FT time, we study the Partial FT Access setting, investigating how our method performs when it only sees limited FT tasks at PT time. In both settings, our method outperforms baselines.
5.1 Problem Setup
Dataset and Task. We consider the transfer learning benchmark introduced in [28], where the prediction problem at both PT and FT is multitask binary classification: predicting the presence/absence of specific protein functions (y) given a Protein-Protein Interaction (PPI) network as input (represented as a graph x). The PT dataset has pairs D_PT = {(x_i, y_i)}_{i=1}^{|D_PT|}, where y ∈ {0, 1}^5000 characterizes the presence/absence of 5000 particular protein functions. The FT dataset has pairs D_FT = {(x_i, y_i)}_{i=1}^{|D_FT|}, where y ∈ {0, 1}^40 now characterizes the presence/absence of 40 different protein functions. Further dataset details are in Appendix F.
Meta-Parameterized Multitask PT. To define a meta-parameterized PT scheme, we let meta-parameters φ ∈ R^5000 be weights for the binary PT tasks. Then, we define a PT loss incorporating the weights: L_PT = (1/5000) Σ_{i=1}^{5000} 2σ(φ_i) L_CE(f_PT(x; θ_PT, ψ_PT)_i, y_i), with i indexing the tasks, σ(·) representing the sigmoid function (to ensure non-negativity and clamp the range of the weights), and L_CE denoting the binary cross-entropy loss. With this loss defined, we use Algorithm 1 (with P = 10 PT steps and K = 1 truncated FT steps) to jointly learn φ and the feature extractor parameters θ_PT. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset D_FT proposed by the dataset creators [28] for computing the relevant gradient terms.
Baselines. Motivated by our goals, we compare with the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• Graph Supervised PT: As explored in prior work on this domain [28], perform multitask supervised PT with D_PT. This corresponds to setting all task weights to 1: φ_i = 1, i = 1, . . . , 5000.
• CoTrain: A common baseline that makes use of the FT data available during PT [70] (like meta-parameterized PT). We PT a model with 5000+40 outputs (covering the space of PT and FT labels) jointly on both D_PT and D_FT. We do so by alternating gradient updates on batches sampled from each dataset in turn. Further details are in Appendix F.
• CoTrain + PCGrad: An extension of CoTrain, where we leverage the method PCGrad [72] to perform gradient projection and prevent destructive gradient interference between updates from D_PT and D_FT. Further details and variants we tried are in Appendix F.
Experimental Details. We use a standardized setup to facilitate comparisons. Following [28], all methods use the Graph Isomorphism Network architecture [69], undergo PT for 100 epochs, and FT for 50 epochs, over 5 random seeds, using early stopping based on validation set performance. During FT, we initialize a new FT network head and either FT the whole network or freeze the PT feature extractor and learn the FT head alone (Linear Evaluation [50]). We report results for the strategy that performed best (full results in the appendix). We consider two experimental scenarios: (1) Full FT Access: Provide methods full access to D_PT and D_FT at PT time (D_FT^(Meta) = D_FT) and evaluate on the full set of 40 FT tasks; (2) Partial FT Access: Limit the number of FT tasks seen at PT time, by letting D_FT^(Meta) include only 30 of the 40 FT tasks. At FT time, models are fine-tuned on the 10 held-out tasks not in D_FT^(Meta). We use a 4-fold approach where we leave out 10 of the 40 FT tasks in turn, and examine performance across these 10 held-out tasks, over the folds.
5.2 Results
Key Findings. By optimizing PT task weights, meta-parameterized multitask PT improves performance on the FT problem of predicting the presence/absence of protein functions given a protein-protein interaction graph as input. Performance improvements are also seen when generalizing to new FT tasks (protein functions), unseen at meta-PT time.
Table 1 presents quantitative results for the two experimental settings described. For the No PT and Graph Supervised PT baselines, we re-implement the methods from [28], obtaining improved results (full comparison in Appendix Table 5). In both full and partial FT access settings, meta-parameterized PT improves significantly on other methods, indicating that optimizing meta-parameters can improve predictive performance generally, and be effective even when new, related tasks are considered at evaluation time. Interestingly, we observe that CoTrain and CoTrain + PCGrad obtain relatively poor performance compared to other baselines; this could be because the methods overfit to the FT data during PT. Further analysis of this is presented in Appendix F.
Further experiments. In Appendix F, we study another partial FT access scenario with smaller D_FT^(Meta), setting |D_FT^(Meta)| = 0.5 |D_FT|, and find that meta-parameterized PT again outperforms other methods (Table 7). We also examine another meta-parameter learning baseline, namely a version of CoTrain where we optimize task weights using a traditional hyperparameter optimization algorithm [42] jointly with the main model. We find that our method outperforms this baseline also (Table 5).

Method                 | AUC (D_FT^(Meta) = D_FT) | AUC (D_FT^(Meta) excludes tasks)
No PT                  | 66.6 ± 0.7               | 65.8 ± 2.5
Graph Supervised PT    | 74.7 ± 0.1               | 74.8 ± 1.8
CoTrain                | 70.2 ± 0.3               | 69.3 ± 1.8
CoTrain + PCGrad       | 69.4 ± 0.2               | 68.1 ± 2.3
Meta-Parameterized PT  | 78.6 ± 0.1               | 77.0 ± 1.3

Table (1) Meta-Parameterized PT improves predictive performance over baselines. Table showing mean AUC and standard error for two evaluation settings. When provided all FT data at PT time (first results column), meta-parameterized PT significantly improves predictive performance.
In a more challenging setting when D_FT^(Meta) excludes FT tasks (10 of the 40 available tasks are held out), evaluating mean AUC/standard error across four folds with each set of 10 FT tasks held out in turn, meta-parameterized PT again obtains the best performance: it is effective even with partial information about the downstream FT tasks.
Analysis of learned structures. In Appendix F, we conduct further analysis and study the effect of various PT strategies on the pre-trained representations (Figure 3), finding intuitive patterns of similarity between different methods. We also examine the learned task weights (Figure 4), and examine performance on a per-FT-task basis with/without meta-parameterized PT (Figure 5), finding little evidence of negative transfer.
6 Meta-Parameterized SimCLR for Semi-Supervised Learning with ECGs
We now explore a second real-world application of our method: optimizing a data augmentation policy for self-supervised PT with SimCLR [8, 9] on electrocardiograms (ECGs). SimCLR is a popular self-supervised PT method that leverages data augmentations to define a contrastive PT objective (details in Appendix G.1). The choice/strength of the augmentations used significantly impacts the effectiveness of the algorithm [8]. In settings where relevant augmentations are known (e.g., natural images), SimCLR is readily applicable; however, for ECGs, effective augmentations are less clear, motivating the use of our algorithm to optimize the augmentation pipeline.
We have two experimental goals. Firstly, we examine the typical semi-supervised learning setting of Full FT Access: we explore whether optimizing the augmentations in SimCLR PT can improve performance on the supervised FT task of detecting pathologies from ECGs, given access to all FT data at meta-PT time. Secondly, to study the data efficiency of our method, we consider the Partial FT Access setting and explore performance given access to limited FT data at meta-PT time. We find that our method improves the performance of SimCLR, and that it is effective even with very limited amounts of FT data provided at meta-PT time.
6.1 Problem Setup
Dataset and Task. We construct a semi-supervised learning (SSL) problem using PTB-XL [64, 20], an open-source dataset of electrocardiogram (ECG) data. Let the model input at both PT and FT time be denoted by x, which represents a 12-lead (or channel) ECG sampled at 100 Hz for 10 seconds, resulting in a 1000 × 12 signal. Our goal is to pre-train a model f_PT on an unlabeled PT dataset of ECGs D_PT = {x_i}_{i=1}^{|D_PT|} using SimCLR PT [8], and then fine-tune it on the labeled FT dataset D_FT = {(x_i, y_i)}_{i=1}^{|D_FT|}, where the FT labels y ∈ {0, 1}^5 encode whether the signal contains certain features indicative of particular diseases/pathologies. Further dataset details are in Appendix G.
ECG Data Augmentations. To augment each ECG for SimCLR (example in Appendix G, Figure 6), we apply three transformations in turn (based on prior work in time series augmentation [30, 66]; a simplified code sketch follows below):
1. Random cropping: A randomly selected portion of the signal is zeroed out.
2. Random jittering: IID Gaussian noise is added to the signal.
3. Random temporal warping: The signal is warped with a random, diffeomorphic temporal transformation. This is formed by sampling from a zero mean, fixed variance Gaussian at each temporal location in the signal to obtain a velocity field, and then integrating and smoothing (following [4, 5]) to generate a temporal displacement field, which is applied to the signal.
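Below is a simplified numpy sketch of the three augmentations; the parameter values are illustrative, and the temporal warp is a rough stand-in for the diffeomorphic transformation described above (in particular, it does not enforce monotonicity of the displacement field).

```python
import numpy as np

def augment_ecg(x, crop_frac=0.1, jitter_std=0.05, warp_std=0.5, rng=None):
    """Sketch of the three ECG augmentations on a (T, C) signal."""
    if rng is None:
        rng = np.random.default_rng()
    T, C = x.shape
    out = x.copy()
    # 1) random cropping: zero out a randomly placed window
    w = int(crop_frac * T)
    s = rng.integers(0, T - w + 1)
    out[s:s + w] = 0.0
    # 2) random jittering: add IID Gaussian noise
    out = out + jitter_std * rng.normal(size=out.shape)
    # 3) random temporal warping (simplified): sample a velocity field,
    #    integrate and smooth it into a displacement field, then resample
    velocity = rng.normal(scale=warp_std, size=T)
    displacement = np.convolve(np.cumsum(velocity), np.ones(25) / 25, mode="same")
    warped_t = np.clip(np.arange(T) + displacement, 0, T - 1)
    out = np.stack(
        [np.interp(warped_t, np.arange(T), out[:, c]) for c in range(C)], axis=1
    )
    return out
```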
Meta-Parameterized SimCLR. To construct a meta-parameterized SimCLR PT scheme, we instantiate meta-parameters φ as the weights of a neural network w(x; φ) that takes in an input signal and outputs the warp strength: the variance of the Gaussian that is used to obtain the velocity field for temporal warping. This parameterization permits signals to be warped more/less aggressively depending on their individual structure. With this definition, the SimCLR PT loss is directly a function of the meta-parameters, and we can use Algorithm 1 (with P = 10 PT steps and K = 1 truncated FT steps) to jointly learn φ and the feature extractor parameters θ_PT. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset D_FT proposed by the dataset creators [64] for computing the relevant gradient terms.
Baselines. Our experimental goals suggest the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• SimCLR: Pre-train a model using SimCLR with the above three augmentations without learning per-example temporal warping strengths.
Experimental Details. We standardize the experimental setup to facilitate comparisons. All methods use a 1D CNN based on a ResNet-18 [23] architecture. The temporal warping network w(x; φ) is a four-layer 1D CNN. SimCLR PT takes place for 50 epochs for all methods, over three PT seeds. At evaluation time, for all methods, we initialize a new FT network head over the PT network feature extractor and FT the whole network for 200 epochs, over five FT seeds. Validation set AUC is used for early stopping. We consider two experimental settings: (1) Full FT Access, standard SSL: consider different sizes of the labelled FT dataset D_FT and make all the FT data available at meta-PT time, D_FT^(Meta) = D_FT; and (2) Partial FT Access, examining the data efficiency of our algorithm: SSL when only limited FT data is available at meta-PT time: D_FT^(Meta) ⊆ D_FT. We evaluate performance across the 5 binary classification tasks in both settings. Further details are provided in Appendix G.
6.2 Results
Key Findings. By optimizing the data augmentation policy used in SimCLR PT, meta-parameterized SimCLR improves performance on the FT problem of detecting pathologies from ECG data. Even a small amount of FT data provided at meta-PT time can lead to improved FT performance.

Test AUC at different FT dataset sizes |D_FT|:
Method                     | 100        | 250        | 500        | 1000       | 2500
No PT                      | 71.5 ± 0.7 | 76.1 ± 0.3 | 78.7 ± 0.3 | 82.0 ± 0.2 | 84.5 ± 0.2
SimCLR                     | 74.6 ± 0.4 | 76.5 ± 0.3 | 79.8 ± 0.3 | 82.2 ± 0.3 | 85.8 ± 0.1
Meta-Parameterized SimCLR  | 76.1 ± 0.5 | 77.8 ± 0.4 | 81.7 ± 0.2 | 84.0 ± 0.3 | 86.7 ± 0.1

Table (2) Meta-Parameterized SimCLR obtains improved semi-supervised learning performance. Table showing mean AUC/standard error over seeds across 5 FT binary classification tasks for baselines and meta-parameterized SimCLR at different sizes of D_FT, with D_FT^(Meta) = D_FT. We observe improvements in performance with meta-parameterized SimCLR, which optimizes the augmentation pipeline.
Table 2 shows results for the Full FT Access setting, D_FT^(Meta) = D_FT: mean AUC/standard error over seeds across the 5 FT binary classification tasks at different sizes of D_FT. We observe that meta-parameterized SimCLR improves on other baselines in all settings.
Note that while these gains are modest, they are obtained with simple augmentation policies; our method may yield further improvements if applied to policies with more scope to specialize the augmentations. Next, we consider the Partial FT Access scenario where D_FT^(Meta) ⊆ D_FT, which is relevant when we only have a small amount of FT data at meta-PT time. Fixing |D_FT| = 500, we find that with |D_FT^(Meta)| as small as 50, we obtain a test AUC of 81.3 ± 0.5, compared to 79.8 ± 0.3 with no optimization of augmentations: this shows that even a small |D_FT^(Meta)| appears to be sufficient for meta-parameter learning. Further results showing performance curves varying |D_FT^(Meta)| are in Appendix G.
Further experiments. In Appendix G, we study other aspects of our method on this domain, including: (1) exploring different values of K, the number of FT steps differentiated through when obtaining meta-parameter gradients; and (2) examining a meta-parameter learning baseline where augmentations are optimized for supervised learning, using the method in [42], and then applied to semi-supervised learning (to compare how optimizing augmentations for supervised learning compares to optimizing them for semi-supervised learning). We find that our method is not very sensitive to the value of K (provided K > 0), and that it outperforms this additional baseline.
7 Related Work
Gradient-based hyperparameter optimization (HO): Gradient-based HO roughly falls into two camps. The simpler and less scalable approach differentiates through training [12, 44]. The other approach assumes that optimization reaches a fixed point, and approximates the best-response Jacobian [7, 41, 43, 42]. Neither of these approaches can be straightforwardly applied to scalably differentiate through two stages of optimization (PT & FT). Direct differentiation through both stages would be too memory-intensive. Approximating the best-response Jacobian using the IFT as in [42] twice is feasible, but requires changing the FT objective to include a proximal term [55], and tuning two sets of interacting approximations. Instead, we compose a constant-memory IFT approximation for the lengthy PT stage with exact backprop-through-training for the shorter FT stage.
Applications of Nested Optimization: Many prior works frame learning as nested optimization, including few-shot learning [16, 1, 17, 55, 21, 58, 53, 75, 31, 38], neural network teaching [14, 15, 62, 54], learning data augmentation and reweighting strategies [32, 22, 57, 60, 29], and auxiliary task learning [49, 51, 39]. The majority of this work studies nested optimization in the standard one-stage supervised learning paradigm, unlike our setting: the two-stage PT & FT problem. The most closely related works to ours are [70], where PT task weights are learned for a multitask PT problem using electronic health record data, and [71], where a masking policy is learned for masked language modelling PT. In contrast to our work, which introduces the more general framing of meta-parameter optimization, [70] and [71] are focused only on specific instantiations of meta-parameters as task weights and masking policies. The learning algorithms in these works either differentiate directly through truncated PT & FT [71] (which may not be scalable to longer PT/large encoder models), or leverage extensive first-order approximations [70], unlike our more generally applicable approach.
8 Scope and Limitations
Our gradient-based algorithm applies in situations where we want to optimize (potentially high-dimensional) PT hyperparameters, or meta-parameters, and have access to a model, PT data, and FT data. We demonstrated that even limited FT data availability can be sufficient to guide meta-parameter learning; however, our method would not apply when no FT data at all is available at meta-PT time, or if the model or PT data were not available. Our algorithm requires meta-parameters to be differentiable, and cannot directly be used to optimize meta-parameters that do not affect the PT optimization landscape (e.g., PT learning rates).
9 Conclusion
In this work, we studied the problem of optimizing high-dimensional pre-training (PT) hyperparameters, or meta-parameters. We formalized Meta-Parameterized Pre-Training, a variant of standard PT incorporating these meta-parameters, and proposed a gradient-based algorithm to efficiently learn meta-parameters by approximately differentiating through the two-stage PT & FT learning process. In experiments, we used our algorithm to improve predictive performance on two real-world PT tasks: multitask PT with graph-structured data [28], and self-supervised contrastive PT on electrocardiogram signals using SimCLR [8]. Future work could apply our method to learn other potential instantiations of meta-parameters, such as learned auxiliary tasks and noise models.
Societal Impact. Our contribution in this work is methodological, namely a new algorithm to optimize high-dimensional pre-training hyperparameters. We do not expect there to be direct negative societal impacts of this contribution. However, to evaluate our method, we considered an experimental domain using healthcare data. Given the high-risk nature of this domain, before use in real-world settings, the method should be validated in retrospective and prospective studies, to detect any failure modes and identify potential harm that may come from deploying it.
Acknowledgements
This work was supported in part by funds from Quanta Computer, Inc. The authors thank the members of the Clinical and Applied Machine Learning group at MIT and Paul Vicol for helpful feedback.
1. What is the main contribution of the paper in terms of solving the problem of meta-learning hyperparameters in pre-training stages? 2. What are the strengths of the proposed approach, particularly in combining gradient-based hyperparameter optimization methods? 3. What are the weaknesses of the paper regarding the short horizon bias and the limitation of IFT based methods? 4. How does the reviewer assess the effectiveness of the proposed method compared to baselines in the experiments? 5. What are the suggestions provided by the reviewer for improving the method, such as dealing with short horizon bias or incorporating additional baselines?
Summary Of The Paper Review
Summary Of The Paper This paper proposes to solve a very important problem: meta-learning hyperparameters in the pre-training stage, followed by fine-tuning on a target task. Considering that the hyperparameters can be high-dimensional, the authors consider gradient-based hyperparameter optimization (HO) methods such as IFT-based methods and unrolled differentiation. The main difficulty comes from the huge computational cost of dealing with long PT and FT trajectories and repeating such meta-optimization steps. The authors thus simply reduce the number of gradient steps used for each PT and FT stage. Yet, the experimental results demonstrate the effectiveness of the proposed method over some of the simple baselines.
Review
== Pros ==
I think the paper tackles a very important and interesting problem: meta-learning of PT hyperparameters followed by FT. As far as I know, there is little literature that explicitly tackles this problem. The problem is challenging because both PT and FT involve long optimization trajectories in practice, making the optimization problem computationally very expensive. Therefore, I would say that the motivation of this paper is very clear and important, in terms of extending the current range of meta-learning and hyperparameter-optimization studies. The paper is well written and easy to understand, combining the two most popular techniques for HO (IFT and unrolled differentiation). The experimental results show that although the proposed method is simple and straightforward, it gains some improvements over the baselines considered.
== Cons ==
As far as I understand, the main difficulty should come from the long PT and FT optimization trajectories used in practice (e.g., each at least thousands of SGD steps). However, the authors simply set each of them to a very small number of steps, such as P = 10 and K = 1 for both of the real-world experiments. This will lead to short-horizon bias, with the solution being less appealing. For example, Shin et al. recently proposed a method that can increase the frequency of meta-updates even with long inner-optimization trajectories. It would be better if the authors could find some way to deal with the short-horizon bias, because exactly that is the main challenge of this problem. The proposed method is fairly straightforward to think of, combining the two existing techniques for HO and unrolled optimization (MAML). I agree that the method is reasonable, but I am not sure how much it contributes technically. Also, as mentioned in Section 8 (Scope and Limitations), IFT-based methods cannot handle hyperparameters that do not affect the PT optimization landscape. In the experiments, I think the baselines are too few. Specifically, there are no baselines that can learn the hyperparameter φ. What if we learn φ only with the PT dataset based on a conventional HO framework? We may split D_PT into D_PT^train and D_PT^val, and optimize φ with the Neumann IFT method as you did. I expect it will work well, because the authors reported good performance in the Partial FT Access setting, where only a fraction of FT tasks is used for meta-training and meta-testing is done on the exclusive set of FT tasks not seen during meta-training. It means that whatever tasks we use for learning φ, the learned φ will generalize well to unseen tasks. If it works well, then the importance of meta-learning over the PT & FT framework becomes questionable. (minor comment) There is no qualitative analysis.
It would be interesting to visualize the learned φ and give the readers some intuition about how it helped with the performance (or any other visualization). (minor comment) Missing reference for meta-learning of self-supervised learning: Kang et al.
= References =
Shin et al., Large-Scale Meta-Learning with Continual Trajectory Shifting, ICML 2021
Kang et al., Neural Mask Generator: Learning to Generate Adaptive Word Maskings for Language Model Adaptation, EMNLP 2020
NIPS
Title Meta-learning to Improve Pre-training Abstract Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned. The hyperparameters introduced by these strategies therefore must be tuned appropriately. However, setting the values of these hyperparameters is challenging. Most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%. 1 Introduction A popular and important learning paradigm for neural networks is pre-training (PT) followed by finetuning (FT), an approach commonly used in transfer learning [13, 59, 19, 27, 52, 11, 37, 74, 35, 28], and semi-supervised learning [9, 8, 24]. This paradigm has led to performance improvements in many domains, including computer vision [13, 59, 19, 37, 74, 35], natural language processing [27, 52, 11, 40, 34], graph structured prediction [28], and clinical machine learning [45, 46, 2, 48], and is especially helpful in settings where downstream tasks have limited training data. The PT & FT paradigm introduces high-dimensional, complex PT hyperparameters, such as parameterized data augmentation policies used in contrastive representation learning [8, 22] or the use of task, class, or instance weighting variables in multi-task PT to avoid negative transfer [70]. These hyperparameters can significantly affect the quality of pre-trained models [8], and thus finding techniques to set their values optimally is an important area of research. Choosing optimal PT hyperparameter values is challenging, and existing methods do not work well. Simple approaches such as random or grid search are inefficient since evaluating a hyperparameter setting requires performing the full, two-stage PT & FT optimization, which may be prohibitively computationally expensive. Gradient-free approaches, such as Bayesian optimization or evolutionary algorithms [33, 61, 47], are also limited in how well they scale to this setting. Gradient-based 35th Conference on Neural Information Processing Systems (NeurIPS 2021). approaches [44, 41, 43, 42] can be used online to jointly learn hyperparameters and model parameters and can scale to millions of hyperparameters [42], but typically deal with a standard single-stage learning problem (e.g., normal supervised learning) and are therefore not directly applicable to the two-stage PT & FT learning problem. In this work, we address this gap and propose a method for high-dimensional PT hyperparameter optimization. 
We first formalize a variant of the PT & FT paradigm, which we call meta-parameterized pre-training (Figure 1), where meta-parameters refer to arbitrary PT hyperparameters or parameterizable architectural choices that can be optimized to improve the learned representations.1 We outline a meta-learning problem characterizing the optimal meta-parameters propose a gradient-based method to learn meta-parameters. Our contributions are: • We formalize meta-parameterized pre-training, a variant of the pre-training and fine-tuning (PT & FT) paradigm where PT is augmented to incorporate meta-parameters: arbitrary structures that can be optimized to improve learned representations. • We propose a scalable gradient-based algorithm to learn meta-parameters using a novel method to obtain meta-parameter gradients through the two-stage PT & FT process. Our gradient estimator composes a constant-memory implicit differentiation approximation for the longer PT stage and exact backpropagation through training for the shorter FT stage. • We show that our algorithm recovers optimal meta-parameters in toy experiments on synthetic data. • In two real-world experimental domains, we demonstrate our algorithm improves performance. Firstly, on a multitask PT benchmark over biological graph-structured data [28], using our method to optimize meta-parameters representing task weights improves performance by up to 3.9% AUROC. Secondly, for semi-supervised learning using SimCLR [8] over electrocardiography data, using our algorithm to optimize meta-parameters representing the weights of a data augmentation neural network improves performance by up to 1.9% AUROC. 2 Problem Setup and Preliminaries In this section, we define the meta-parameterized pre-training meta-learning problem, and compare it to traditional fine-tuning and pre-training. A full glossary of notation is in Appendix B, Table 3. Notation. Let the subscript • be a placeholder for either PT (pre-training) or FT (fine-tuning), X ⊆ Rd be our input domain, Y• and Ŷ• be the true and predicted output spaces for some model respectively, and Θ,Ψ•,Φ be spaces of parameters for models. We will use f• : X ; (Θ,Ψ•)→ Ŷ• to refer to a parametric model, with the semicolon separating the input space from the parameter spaces. We then define f• = f (head) • ◦ f (feat), such that f (feat)(·;θ ∈ Θ) is a feature extractor that is transferable across learning stages (e.g., pre-training to fine-tuning), and f (head)• (·;ψ ∈ Ψ•) is a stage-specific head that is not transferable. Given a data distribution x•, y• ∼ D•, parametric model f•, and loss function L• : Ŷ• × Y• → R, we will also define for convenience a corresponding expected loss L• : Θ,Ψ• → R via L•(θ,ψ•;D•) = ED• [L•(f•(x•;θ,ψ•), y•)]. We also adopt the convention that the output of the argmin operator is any arbitrary minimum, rather than the set of possible minima, to avoid complications in notation. 2.1 Problem Formulation Supervised Learning (Fig. 1A). In a fully-supervised setting (our fine-tuning domain), we are given a data distribution DFT, model f , and loss LFT. Using a learning algorithm AlgFT (e.g., SGD) that takes as input initial parameters θ(0)FT ,ψ (0) FT , our goal is to approximate the LFT-optimal parameters: θ∗FT,ψ ∗ FT = AlgFT(θ (0) FT ,ψ (0) FT ;DFT) ≈ argminθ∈Θ,ψ∈ΨFT LFT(θ,ψ;DFT) Pre-training (Fig. 1B). 
For tasks where data is scarce, we can additionally incorporate a pretraining step and approximate the optimal initial parameters for FT (i.e., the final pre-trained weights are used as initialization weights of the FT stage), again via an optimization algorithm AlgPT: θ∗PT = AlgPT(θ (0) PT ,ψ (0) PT ;DPT) ≈ argminθ∈Θ LFT(AlgFT(θ,ψ (0) FT ;DFT);DFT). 2 1We use the term meta-parameter since these structures do not directly affect inference of the final model after FT, but instead inform the process of learning this model (by modulating the PT process). 2Note that we discard the PT head ψ∗PT here as only the PT feature extractor θ ∗ PT is transferred. Figure (1) Meta-Parameterized Pre-Training. A paradigm where meta-parameters — rich, potentially high dimensional structures that generalize PT hyperparameters — are incorporated in PT to improve the learned representations. Meta-parameters are optimized in a meta-PT phase, using data from FT task(s) in a meta-FT dataset. The FT and meta-FT datasets are (potentially overlapping) samples from the FT data distribution. Meta-Parameterized PT (Fig. 1C). In Meta-Parameterized PT, we recognize that, in addition to taking as input the PT parameters θ, AlgPT is itself parameterized by a set of meta-parameters φ ∈ Φ: arbitrary, potentially high dimensional quantities that inform the structure of the algorithm directly. These could represent weighting strategies, data augmentation policies, or sampling processes. The optimal meta-parameters φ(opt) are the solution to the following meta-PT optimization problem: φ(opt) = argmin φ∈Φ LFT ( AlgFT ( AlgPT ( θ (0) PT ,ψ (0) PT ;DPT,φ ) ,ψ (0) FT ;DFT ) ;DFT ) . 2.2 Example: Multitask Meta-Parameterized Pre-Training To make our notation concrete, here we instantiate our setup for a multitask pre-training problem. Problem: Suppose we have a multitask classification dataset, (X × Y)N such that Y = Y1 × · · · × YK consists of labels for K distinct tasks. Of this full set of tasks, we are interested only in a subset of M tasks, S = {t1, . . . , tM} ⊆ {1, . . . ,K}. Supervised FT: Under supervised FT alone, we can directly average a cross-entropy loss LCE over only the tasks in S, LFT(ŷ,y) = 1M ∑M j=1 LCE(ŷ(tj), y(tj)), and then solve this problem via SGD. PT: If we assume that S is a random subset of the full set of tasks, we can introduce a PT stage over all tasks: LPT(ŷ,y) = 1K ∑K i=1 LCE(ŷ(i), y(i)), followed by FT on S alone. As S is a random subset, leveraging all tasks for PT is well motivated and may improve performance. Meta-Parameterized PT: In the case where T is not a random subset, the PT strategy described above is no longer well-motivated. However, using meta-parameterized PT, we can still effectively pre-train by introducing the meta-parameters that weight the tasks φ = [φ1 . . . φK ] and modulate the loss function LPT: LPT(ŷ,y;φ) = ∑K i=1 φiLCE(ŷ(i), yi). With optimal meta-parameters φ (opt), the PT stage will leverage only that subset of tasks that best informs the final FT performance. This setting mirrors our real-world experiment in Section 5. 3 Methods: Optimizing Meta-Parameters for Two-Stage Training We now introduce our gradient-based algorithm to optimize meta-parameters. We first describe how to efficiently approximate meta-parameter gradients through the two-stage PT and FT optimization. We then present our algorithm, and outline practical considerations when using it. 
3 Methods: Optimizing Meta-Parameters for Two-Stage Training

We now introduce our gradient-based algorithm to optimize meta-parameters. We first describe how to efficiently approximate meta-parameter gradients through the two-stage PT and FT optimization. We then present our algorithm and outline practical considerations when using it.

3.1 Efficient Computation of Meta-Parameter Gradients

We begin by defining:
$$g(\phi; \theta^{(0)}_{PT}, \psi^{(0)}_{PT}, \psi^{(0)}_{FT}) = \mathcal{L}_{FT}\Big(\underbrace{\text{Alg}_{FT}\big(\overbrace{\text{Alg}_{PT}(\theta^{(0)}_{PT}, \psi^{(0)}_{PT}; \mathcal{D}_{PT}, \phi)}^{\text{parameter } \theta_{PT}},\ \psi^{(0)}_{FT}; \mathcal{D}_{FT}\big)}_{\text{parameters } \theta_{FT},\, \psi_{FT}}; \mathcal{D}_{FT}\Big), \tag{1}$$
so that $\phi^{(opt)} = \operatorname*{argmin}_{\phi \in \Phi} g(\phi)$. We also define two best-response values:
$$\theta^*_{PT}(\phi) = \text{Alg}_{PT}(\theta^{(0)}_{PT}, \psi^{(0)}_{PT}; \mathcal{D}_{PT}, \phi), \qquad \theta^*_{FT}(\phi), \psi^*_{FT}(\phi) = \text{Alg}_{FT}(\theta^*_{PT}(\phi), \psi^{(0)}_{FT}; \mathcal{D}_{FT}).$$
We do not explicitly include the dependence of the best responses on the initialization values, for notational convenience.

With these defined, we now consider the desired gradient term, $\frac{\partial g}{\partial \phi}$. Under our definitions, the direct partial derivatives $\frac{\partial \mathcal{L}_{FT}}{\partial \phi}$ and $\frac{\partial \text{Alg}_{FT}}{\partial \phi}$ are zero, so $\frac{\partial g}{\partial \phi}$ reduces to a simple expression of the chain rule:
$$\frac{\partial g}{\partial \phi}\bigg|_{\phi'} = \underbrace{\frac{\partial \mathcal{L}_{FT}}{\partial [\theta_{FT}, \psi_{FT}]}\bigg|_{\theta^*_{FT}(\phi'),\, \psi^*_{FT}(\phi')}}_{\text{FT loss gradient}} \times \underbrace{\frac{\partial \text{Alg}_{FT}}{\partial \theta_{PT}}\bigg|_{\theta^*_{PT}(\phi')}}_{\text{FT best-response Jacobian}} \times \underbrace{\frac{\partial \text{Alg}_{PT}}{\partial \phi}\bigg|_{\phi'}}_{\text{PT best-response Jacobian}}. \tag{2}$$

The FT loss gradient term on the RHS of (2) is easily computed using backpropagation. Computing the other two terms is more involved, and we detail each below, beginning with the PT best-response Jacobian. The full algorithm with both gradient estimation terms is provided in Algorithm 1.

PT Best-Response Jacobian $\frac{\partial \text{Alg}_{PT}}{\partial \phi}$. Using recent work in hyperparameter optimization with implicit differentiation [42], we re-express this term using the implicit function theorem (IFT). If we assume that $\theta^*_{PT}(\phi) = \text{Alg}_{PT}(\theta^{(0)}_{PT}; \mathcal{D}_{PT}, \phi)$ is a good approximation of $\operatorname*{argmin}_{\theta \in \Theta} \mathcal{L}_{PT}(\theta; \mathcal{D}_{PT}, \phi)$ (i.e., the PT model converges to $\mathcal{L}_{PT}$-optimal parameters), then under certain smoothness and regularity assumptions on the PT parameters and meta-parameters, the IFT allows us to re-express $\frac{\partial \text{Alg}_{PT}}{\partial \phi}$ as:
$$\frac{\partial \text{Alg}_{PT}}{\partial \phi}\bigg|_{\phi'} = -\left[\frac{\partial^2 \mathcal{L}_{PT}}{\partial \theta_{PT}\, \partial \theta_{PT}^\top}\right]^{-1} \times \frac{\partial^2 \mathcal{L}_{PT}}{\partial \theta_{PT}\, \partial \phi^\top}\Bigg|_{\theta^*_{PT}(\phi'),\, \phi'}, \tag{3}$$
which is the product of an inverse Hessian and a matrix of mixed partial derivatives. Following [42], the inverse can be efficiently approximated using a truncated Neumann series.

FT Best-Response Jacobian $\frac{\partial \text{Alg}_{FT}}{\partial \theta_{PT}}$. First, note that without additional constraints on $\text{Alg}_{FT}$, the FT best-response Jacobian may be zero. This is because $\mathcal{L}_{FT}$ has no functional dependence on the variable $\theta_{PT}$ and, if we assume the convergence point $\theta^*_{FT}$ is stable (as we did for the PT best-response Jacobian), this implies that the gradient of $\theta^*_{FT}$ with respect to $\theta_{PT}$ would be zero. To enable effective learning, we must therefore either (1) impose restrictions on $\text{Alg}_{FT}$ to ensure there is a dependence between the initialization point and the final loss value (e.g., proximal regularization [55]), or (2) leverage methods that do not differentiate through $\text{Alg}_{FT}$ through convergence, as at non-converged points we will still observe nonzero $\mathcal{L}_{FT}$-gradients [29, 51]. Given that the FT phase often involves shorter optimization horizons than PT, we take approach (2) here, and iteratively update $\theta_{FT}$ for $K$ steps. We first initialize the FT head $\psi^{(0)}_{FT}$ and then compute:
$$\theta^{(0)}_{FT} = \mathrm{copy}(\theta^*_{PT}) \quad \text{(init with PT solution, implicitly performing stop gradient)}$$
$$\big[\theta^{(k)}_{FT}, \psi^{(k)}_{FT}\big] = \big[\theta^{(k-1)}_{FT}, \psi^{(k-1)}_{FT}\big] - \eta_{FT}\, \frac{\partial \mathcal{L}_{FT}}{\partial [\theta_{FT}, \psi_{FT}]}\bigg|_{\theta^{(k-1)}_{FT},\, \psi^{(k-1)}_{FT}}, \quad k = 1, \ldots, K$$
$$\theta^*_{FT}, \psi^*_{FT} \approx \theta^{(K)}_{FT}, \psi^{(K)}_{FT}, \tag{4}$$
and compute the gradient $\frac{\partial \text{Alg}_{FT}}{\partial \theta_{PT}}\big|_{\theta^*_{PT}(\phi')}$ by differentiating through this optimization.³ We can also choose to freeze the feature extractor parameters $\theta_{FT}$ and update only the head parameters $\psi_{FT}$ during truncated FT, and use this to obtain meta-parameter gradients. This resembles linear evaluation, where a linear classifier is trained on top of fixed, pre-trained feature extractors [50, 3, 63]. Together, these two approximations allow for efficient computation of meta-parameter gradients.

³While Equation 4 uses standard gradient descent, we could use other differentiable optimizers (e.g., Adam).
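A minimal PyTorch sketch of the truncated Neumann approximation used for the PT best-response term (Eq. 3). Here `grads` is $\partial \mathcal{L}_{PT}/\partial \theta_{PT}$ computed with `create_graph=True`, and `vec` is the incoming VJP $g_2$ from the FT stage; the remaining mixed-partial term of Eq. 3 is obtained by one further `autograd.grad` call through $\langle \text{ihvp}, \text{grads} \rangle$ with respect to $\phi$ (with a minus sign). Function names and step counts are illustrative:

```python
import torch

def hvp(grads, params, vec):
    # Hessian-vector product: d/d(params) of <grads, vec>, where `grads`
    # was computed with create_graph=True so it still carries a graph.
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params, retain_graph=True)

def neumann_ihvp(vec, grads, params, alpha=0.01, steps=5):
    # Truncated Neumann approximation of an inverse-Hessian-vector product:
    #   H^{-1} v  ~=  alpha * sum_{j=0}^{J} (I - alpha * H)^j v
    # (valid when ||I - alpha * H|| < 1, as in [42]).
    acc = [v.clone() for v in vec]
    cur = [v.clone() for v in vec]
    for _ in range(steps):
        hv = hvp(grads, params, cur)
        cur = [c - alpha * h for c, h in zip(cur, hv)]
        acc = [a + c for a, c in zip(acc, cur)]
    return [alpha * a for a in acc]
```

The approximation costs one Hessian-vector product per Neumann step, so memory stays constant in the PT horizon, which is what makes the long PT stage tractable.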
Algorithm 1: Gradient-based algorithm to learn meta-parameters. Notation is defined in Appendix B, Table 3. Vector-Jacobian products (VJPs) can be efficiently computed by standard autodifferentiation.

1: Initialize PT parameters $\theta^{(init)}_{PT}, \psi^{(init)}_{PT}, \psi^{(0)}_{FT}$ and meta-parameters $\phi^{(0)}$
2: for $n = 1, \ldots, N$ iterations do
3:   Initialize $\theta^{(0)}_{PT} = \theta^{(init)}_{PT}$ and $\psi^{(0)}_{PT} = \psi^{(init)}_{PT}$.
4:   for $p = 1, \ldots, P$ PT iterations do
5:     $[\theta^{(p)}_{PT}, \psi^{(p)}_{PT}] = [\theta^{(p-1)}_{PT}, \psi^{(p-1)}_{PT}] - \eta_{PT}\, \frac{\partial \mathcal{L}_{PT}}{\partial [\theta_{PT}, \psi_{PT}]}\big|_{\theta^{(p-1)}_{PT},\, \psi^{(p-1)}_{PT}}$
6:   end for
7:   Initialize FT encoder with PT solution: $\theta^{(0)}_{FT} = \mathrm{copy}(\theta^{(P)}_{PT})$.
8:   Approximate $\theta^*_{FT}, \psi^*_{FT}$ using Eq. 4.
9:   Compute $g_1 = \frac{\partial \mathcal{L}_{FT}}{\partial [\theta_{FT}, \psi_{FT}]}\big|_{\theta^*_{FT},\, \psi^*_{FT}}$
10:  Compute VJP $g_2 = g_1 \frac{\partial \text{Alg}_{FT}}{\partial \theta_{PT}}\big|_{\theta^{(P)}_{PT},\, \psi^{(0)}_{FT}}$ using the unrolled learning step from line 8.
11:  Approximate VJP $\frac{\partial g}{\partial \phi}\big|_{\phi^{(n-1)}} = g_2 \frac{\partial \text{Alg}_{PT}}{\partial \phi}\big|_{\phi^{(n-1)}}$ using the IFT (Eq. 3).
12:  $\phi^{(n)} = \phi^{(n-1)} - \eta_V\, \frac{\partial g}{\partial \phi}\big|_{\phi^{(n-1)}}$
13:  Update the PT initialization by setting $\theta^{(init)}_{PT} = \theta^{(P)}_{PT}$ and $\psi^{(init)}_{PT} = \psi^{(P)}_{PT}$.
14: end for

3.2 Our Algorithm and Practical Considerations

By leveraging the above approximations, we obtain Algorithm 1 to optimize meta-parameters $\phi$ online during PT & FT of the base model. Note that $\text{Alg}_{PT}$ is explicitly written out as a sequence of gradient updates (lines 4-6 in Algorithm 1). We now discuss practical considerations when using this algorithm, with further details given in Appendix C.
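A condensed sketch of one meta-iteration of Algorithm 1, under several assumptions: `model` exposes a `features` sub-module, `pt_loss`/`ft_loss` and the batch helpers are user-supplied placeholders, `P`, `K`, the learning rates, `opt_pt`, `phi`, and `ft_head_init` are defined elsewhere, and `neumann_ihvp` is the helper sketched above. This is an illustration of the gradient flow, not the authors' implementation:

```python
import torch

for p in range(P):                                    # lines 4-6: PT updates
    loss = pt_loss(model, pt_batch(), phi)
    opt_pt.zero_grad(); loss.backward(); opt_pt.step()

x_tr, y_tr = ft_train_batch()
w = ft_head_init.clone().requires_grad_(True)         # line 7: fresh FT head
for k in range(K):                                    # line 8: unrolled FT (Eq. 4)
    l = ft_loss(model.features(x_tr) @ w, y_tr)
    (gw,) = torch.autograd.grad(l, w, create_graph=True)
    w = w - eta_ft * gw                               # keeps graph back to theta_PT

x_val, y_val = ft_val_batch()
val_loss = ft_loss(model.features(x_val) @ w, y_val)  # line 9
theta = list(model.features.parameters())
g2 = torch.autograd.grad(val_loss, theta, retain_graph=True)  # line 10 (VJP)

pt_grads = torch.autograd.grad(                       # line 11: IFT via Neumann
    pt_loss(model, pt_batch(), phi), theta, create_graph=True)
v = neumann_ihvp(g2, pt_grads, theta)
mix = sum((g * vi.detach()).sum() for g, vi in zip(pt_grads, v))
(meta_grad,) = torch.autograd.grad(-mix, phi)         # minus sign from Eq. 3
phi.data -= eta_v * meta_grad                         # line 12: meta update
```

Only the FT head `w` is unrolled, mirroring the head-only truncation described in Section 3.1, so the memory cost of the FT stage is $K$ head-sized graphs rather than $K$ copies of the encoder.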
(1) Access to $\mathcal{D}_{FT}$ and generalizing to new FT tasks: Solving the meta-PT problem requires availability of the model $f_\bullet$, the PT data $\mathcal{D}_{PT}$, and the FT data $\mathcal{D}_{FT}$. In this work, we assume availability of the model and PT dataset, but since assuming access to the complete FT dataset at meta-PT time is more restrictive, we study two scenarios: Full FT Access, where all FT data that we expect to encounter is available at meta-PT time, and Partial FT Access, where the FT data available at meta-PT time is only a sample from a distribution of FT data that we may encounter later. Full FT Access occurs in settings like semi-supervised learning, where we are given a large unlabelled PT dataset and a small labelled FT dataset, and our goal is to achieve the best possible performance by leveraging these two fixed datasets [68, 73, 25, 24, 8, 9]. Partial FT Access occurs when our goal is to learn transferable representations: at meta-PT time, we might have limited knowledge of FT tasks or data. In evaluating this scenario, we examine generalizability to new FT tasks, given only small amounts of FT data/task availability at meta-PT time, demonstrating that even very limited FT access can be sufficient for effective meta-parameter optimization [11, 45, 56, 28].

(2) $\mathcal{D}_{FT}$ splits: In practice, we have access to finite datasets and use minibatches, rather than true data-generating processes. Following standard convention, we split $\mathcal{D}_{FT}$ into two subsets for meta-learning: $\mathcal{D}^{(tr)}_{FT}$ and $\mathcal{D}^{(val)}_{FT}$ (independent of any held-out $\mathcal{D}_{FT}$ testing split), and define the FT data available at meta-PT time as $\mathcal{D}^{(Meta)}_{FT} = \mathcal{D}^{(tr)}_{FT} \cup \mathcal{D}^{(val)}_{FT}$. We use $\mathcal{D}^{(tr)}_{FT}$ for the computation of $\frac{\partial \text{Alg}_{FT}}{\partial \theta_{PT}}\big|_{\theta^{(P)}_{PT},\, \psi^{(0)}_{FT}}$ and $\frac{\partial \text{Alg}_{PT}}{\partial \phi}\big|_{\phi^{(n-1)}}$, and $\mathcal{D}^{(val)}_{FT}$ for the computation of $\frac{\partial \mathcal{L}_{FT}}{\partial [\theta_{FT}, \psi_{FT}]}\big|_{\theta^*_{FT},\, \psi^*_{FT}}$ in Algorithm 1.

(3) Online updates: Given that PT phases often involve long optimization horizons, for computational efficiency we update $\theta_{PT}$ and $\psi_{PT}$ online rather than re-initializing them at every meta-iteration (see Algorithm 1). FT phases are often shorter, so we could in theory re-initialize $\psi_{FT}$ at each meta-iteration, as is presented in Algorithm 1. However, it is more computationally efficient to also optimize this online, and we follow this approach in our experiments. A description of the algorithm with these details is given in Appendix C. Note that prior work [67] has suggested that online optimization of certain hyperparameters (e.g., learning rates) using short horizons may yield suboptimal solutions. We comment on this in Appendix C, and study this effect for our algorithm in synthetic experiments in Appendix E and in real-world experiments on self-supervised learning in Appendix G, revealing it is not a significant concern.

(4) Computational tractability: Our method can scale to large encoder models and high-dimensional meta-parameters, despite the complexity of the two-stage PT & FT process. This is because: (i) meta-parameters are optimized jointly with the base model parameters; (ii) using the IFT to obtain gradients has similar time and memory complexity to one iteration of training [42]; and (iii) the FT best-response Jacobian can be approximated efficiently using a small number of unrolled optimization steps $K$, and by only unrolling the FT head of the network. In our real-world experiments (Sections 5 and 6), meta-parameterized PT has less than twice the time cost of standard PT. Further details on time and memory cost are provided in Appendices F and G.

(5) Setting optimizer parameters: Learning rates and momentum values can impact the efficacy of the algorithm. A discussion of how to set them in practice is provided in Appendix D.

4 Synthetic Experiments

We validate that our algorithm recovers optimal low- and high-dimensional meta-parameters in two synthetic MNIST experiments with Full FT Access. Further details and results are provided in Appendix E, including a study showing that our method performs comparably to differentiating exactly through the entire PT & FT learning process, without approximations.

First, we optimize low-dimensional meta-parameters characterizing a data augmentation scheme. We tune a 1-D meta-parameter $\phi$ representing the mean of a Normal distribution $\mathcal{N}(\phi, 1^2)$ from which we sample rotation augmentations to apply to PT images. FT images undergo rotations from a Normal distribution $\mathcal{N}(\mu_{FT}, 1^2)$ with $\mu_{FT} = 90°$; we therefore expect that $\phi$ should converge to near $\mu_{FT}$. Using Algorithm 1 to optimize $\phi$, we find that the mean error in the optimized meta-parameter over 10 different initializations is small: $7.2 \pm 1.5°$, indicating efficacy of the algorithm.
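The paper does not specify how the rotation sampling is implemented; one way to make the 1-D meta-parameter differentiable is to reparameterize the sampled angle and apply the rotation with a differentiable warp, sketched below (all names are illustrative):

```python
import math
import torch
import torch.nn.functional as F

phi = torch.tensor(0.0, requires_grad=True)  # meta-parameter: mean PT rotation (degrees)

def random_rotation(imgs, phi):
    # Sample an angle from N(phi, 1^2) via the reparameterization trick so
    # the meta-gradient can flow into phi, then rotate differentiably.
    deg = phi + torch.randn(())
    rad = deg * math.pi / 180.0
    cos, sin, zero = torch.cos(rad), torch.sin(rad), torch.zeros(())
    theta = torch.stack([torch.stack([cos, -sin, zero]),
                         torch.stack([sin,  cos, zero])])          # (2, 3)
    theta = theta.unsqueeze(0).repeat(imgs.size(0), 1, 1)          # (N, 2, 3)
    grid = F.affine_grid(theta, list(imgs.shape), align_corners=False)
    return F.grid_sample(imgs, grid, align_corners=False)          # (N, C, H, W)
```

Because both `affine_grid` and `grid_sample` are differentiable, the PT loss computed on rotated images carries a gradient back to `phi` through the sampled angle.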
Next, we consider learning high-dimensional meta-parameters that characterize a PT per-example weighting scheme. The PT dataset contains some examples that have noisy labels, while FT examples all have clean labels. The meta-parameters are the parameters of a neural network that assigns importance weights to each PT example, which are used to weight the loss on that example during PT. We use Algorithm 1 again to optimize $\phi$, over 10 random initializations, and find that the ratio of assigned importance weights between clean-label PT examples and noisy-label PT examples is greater than $10^2$. This is expected, since the noisy-label classes may worsen the quality of the PT model and so should be down-weighted.

5 Meta-Parameterized Multitask Pre-Training for Graph Neural Networks

We consider optimizing PT task weights for a multitask PT & FT problem of predicting the presence of protein functions (multitask binary classification) given graph-structured biological data as input. We have two experimental goals: first, in the Full FT Access setting, where methods are given access to all FT data at PT time, we evaluate whether optimizing task-weighting meta-parameters can improve predictive performance on the FT tasks. Second, motivated by how, in typical transfer learning problems, new tasks or labels not available at PT time may become available at FT time, we study the Partial FT Access setting, investigating how our method performs when it only sees limited FT tasks at PT time. In both settings, our method outperforms baselines.

5.1 Problem Setup

Dataset and Task. We consider the transfer learning benchmark introduced in [28], where the prediction problem at both PT and FT is multitask binary classification: predicting the presence/absence of specific protein functions ($y$) given a Protein-Protein Interaction (PPI) network as input (represented as a graph $x$). The PT dataset has pairs $\mathcal{D}_{PT} = \{(x_i, y_i)\}_{i=1}^{|\mathcal{D}_{PT}|}$, where $y \in \{0, 1\}^{5000}$ characterizes the presence/absence of 5000 particular protein functions. The FT dataset has pairs $\mathcal{D}_{FT} = \{(x_i, y_i)\}_{i=1}^{|\mathcal{D}_{FT}|}$, where $y \in \{0, 1\}^{40}$ now characterizes the presence/absence of 40 different protein functions. Further dataset details are in Appendix F.

Meta-Parameterized Multitask PT. To define a meta-parameterized PT scheme, we let meta-parameters $\phi \in \mathbb{R}^{5000}$ be weights for the binary PT tasks. Then, we define a PT loss incorporating the weights: $\mathcal{L}_{PT} = \frac{1}{5000}\sum_{i=1}^{5000} 2\,\sigma(\phi_i)\, L_{CE}(f_{PT}(x; \theta_{PT}, \psi_{PT})_i, y_i)$, with $i$ indexing the tasks, $\sigma(\cdot)$ representing the sigmoid function (to ensure non-negativity and clamp the range of the weights), and $L_{CE}$ denoting the binary cross-entropy loss. With this loss defined, we use Algorithm 1 (with $P = 10$ PT steps and $K = 1$ truncated FT steps) to jointly learn $\phi$ and the feature extractor parameters $\theta_{PT}$. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset $\mathcal{D}_{FT}$ proposed by the dataset creators [28] for computing the relevant gradient terms.

Baselines. Motivated by our goals, we compare with the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• Graph Supervised PT: As explored in prior work on this domain [28], perform multitask supervised PT with $\mathcal{D}_{PT}$. This corresponds to setting all task weights to 1: $\phi_i = 1,\ i = 1, \ldots, 5000$.
• CoTrain: A common baseline that makes use of the FT data available during PT [70] (like meta-parameterized PT). We PT a model with 5000+40 outputs (covering the space of PT and FT labels) jointly on both $\mathcal{D}_{PT}$ and $\mathcal{D}_{FT}$. We do so by alternating gradient updates on batches sampled from each dataset in turn. Further details are in Appendix F.
• CoTrain + PCGrad: An extension of CoTrain where we leverage the method PCGrad [72] to perform gradient projection and prevent destructive gradient interference between updates from $\mathcal{D}_{PT}$ and $\mathcal{D}_{FT}$. Further details, and variants we tried, are in Appendix F.

Experimental Details. We use a standardized setup to facilitate comparisons. Following [28], all methods use the Graph Isomorphism Network architecture [69], undergo PT for 100 epochs and FT for 50 epochs over 5 random seeds, and use early stopping based on validation set performance. During FT, we initialize a new FT network head and either FT the whole network or freeze the PT feature extractor and learn the FT head alone (Linear Evaluation [50]). We report results for the strategy that performed best (full results are in the appendix). We consider two experimental scenarios: (1) Full FT Access: provide methods full access to $\mathcal{D}_{PT}$ and $\mathcal{D}_{FT}$ at PT time ($\mathcal{D}^{(Meta)}_{FT} = \mathcal{D}_{FT}$) and evaluate on the full set of 40 FT tasks; (2) Partial FT Access: limit the number of FT tasks seen at PT time, by letting $\mathcal{D}^{(Meta)}_{FT}$ include only 30 of the 40 FT tasks. At FT time, models are fine-tuned on the 10 held-out tasks not in $\mathcal{D}^{(Meta)}_{FT}$. We use a 4-fold approach where we leave out 10 of the 40 FT tasks in turn, and examine performance across these 10 held-out tasks, over the folds.

5.2 Results

Key Findings. By optimizing PT task weights, meta-parameterized multitask PT improves performance on the FT problem of predicting the presence/absence of protein functions given a protein-protein interaction graph as input. Performance improvements are also seen when generalizing to new FT tasks (protein functions), unseen at meta-PT time.

Table 1 presents quantitative results for the two experimental settings described. For the No PT and Graph Supervised PT baselines, we re-implement the methods from [28], obtaining improved results (full comparison in Appendix Table 5). In both full and partial FT access settings, meta-parameterized PT improves significantly on other methods, indicating that optimizing meta-parameters can improve predictive performance generally, and can be effective even when new, related tasks are considered at evaluation time. Interestingly, we observe that CoTrain and CoTrain + PCGrad obtain relatively poor performance compared to other baselines; this could be because the methods overfit to the FT data during PT. Further analysis of this is presented in Appendix F.

Further experiments. In Appendix F, we study another Partial FT Access scenario with smaller $\mathcal{D}^{(Meta)}_{FT}$, setting $|\mathcal{D}^{(Meta)}_{FT}| = 0.5\,|\mathcal{D}_{FT}|$, and find that meta-parameterized PT again outperforms other methods (Table 7). We also examine another meta-parameter learning baseline, namely a version of CoTrain where we optimize task weights using a traditional hyperparameter optimization algorithm [42] jointly with the main model. We find that our method outperforms this baseline as well (Table 5).

Method                | AUC ($\mathcal{D}^{(Meta)}_{FT} = \mathcal{D}_{FT}$) | AUC ($\mathcal{D}^{(Meta)}_{FT}$ excludes tasks)
No PT                 | 66.6 ± 0.7 | 65.8 ± 2.5
Graph Supervised PT   | 74.7 ± 0.1 | 74.8 ± 1.8
CoTrain               | 70.2 ± 0.3 | 69.3 ± 1.8
CoTrain + PCGrad      | 69.4 ± 0.2 | 68.1 ± 2.3
Meta-Parameterized PT | 78.6 ± 0.1 | 77.0 ± 1.3

Table 1: Meta-Parameterized PT improves predictive performance over baselines. Mean AUC and standard error are shown for two evaluation settings. When provided all FT data at PT time (first results column), meta-parameterized PT significantly improves predictive performance.
In the more challenging setting where $\mathcal{D}^{(Meta)}_{FT}$ excludes FT tasks (10 of the 40 available tasks are held out), evaluating mean AUC/standard error across four folds with each set of 10 FT tasks held out in turn, meta-parameterized PT again obtains the best performance: it is effective even with partial information about the downstream FT tasks.

Analysis of learned structures. In Appendix F, we conduct further analysis and study the effect of various PT strategies on the pre-trained representations (Figure 3), finding intuitive patterns of similarity between different methods. We also examine the learned task weights (Figure 4) and examine performance on a per-FT-task basis with/without meta-parameterized PT (Figure 5), finding little evidence of negative transfer.

6 Meta-Parameterized SimCLR for Semi-Supervised Learning with ECGs

We now explore a second real-world application of our method: optimizing a data augmentation policy for self-supervised PT with SimCLR [8, 9] on electrocardiograms (ECGs). SimCLR is a popular self-supervised PT method that leverages data augmentations to define a contrastive PT objective (details in Appendix G.1). The choice and strength of the augmentations used significantly impact the effectiveness of the algorithm [8]. In settings where relevant augmentations are known (e.g., natural images), SimCLR is readily applicable; however, for ECGs, effective augmentations are less clear, motivating the use of our algorithm to optimize the augmentation pipeline.

We have two experimental goals. Firstly, we examine the typical semi-supervised learning setting of Full FT Access: we explore whether optimizing the augmentations in SimCLR PT can improve performance on the supervised FT task of detecting pathologies from ECGs, given access to all FT data at meta-PT time. Secondly, to study the data efficiency of our method, we consider the Partial FT Access setting and explore performance given access to limited FT data at meta-PT time. We find that our method improves the performance of SimCLR, and that it is effective even with very limited amounts of FT data provided at meta-PT time.

6.1 Problem Setup

Dataset and Task. We construct a semi-supervised learning (SSL) problem using PTB-XL [64, 20], an open-source dataset of electrocardiogram (ECG) data. Let the model input at both PT and FT time be denoted by $x$, which represents a 12-lead (or channel) ECG sampled at 100 Hz for 10 seconds, resulting in a $1000 \times 12$ signal. Our goal is to pre-train a model $f_{PT}$ on an unlabeled PT dataset of ECGs $\mathcal{D}_{PT} = \{x_i\}_{i=1}^{|\mathcal{D}_{PT}|}$ using SimCLR PT [8], and then fine-tune it on the labeled FT dataset $\mathcal{D}_{FT} = \{(x_i, y_i)\}_{i=1}^{|\mathcal{D}_{FT}|}$, where the FT labels $y \in \{0, 1\}^5$ encode whether the signal contains certain features indicative of particular diseases/pathologies. Further dataset details are in Appendix G.

ECG Data Augmentations. To augment each ECG for SimCLR (example in Appendix G, Figure 6), we apply three transformations in turn (based on prior work in time series augmentation [30, 66]); a code sketch follows the list:
1. Random cropping: A randomly selected portion of the signal is zeroed out.
2. Random jittering: IID Gaussian noise is added to the signal.
3. Random temporal warping: The signal is warped with a random, diffeomorphic temporal transformation. This is formed by sampling from a zero-mean, fixed-variance Gaussian at each temporal location in the signal to obtain a velocity field, and then integrating and smoothing (following [4, 5]) to generate a temporal displacement field, which is applied to the signal.
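A rough sketch of the three transformations. The constants are illustrative, smoothing is approximated with a moving average, and the warp uses simple linear resampling (the paper's exact smoothing/integration scheme follows [4, 5]):

```python
import torch
import torch.nn.functional as F

def augment_ecg(x, crop_frac=0.1, jitter_std=0.05, warp_std=0.2):
    # x: (T, C) single ECG, e.g. T=1000 samples, C=12 leads.
    T, C = x.shape
    x = x.clone()
    # 1. Random cropping: zero out a random contiguous window.
    w = int(crop_frac * T)
    s = torch.randint(0, T - w + 1, ()).item()
    x[s:s + w] = 0
    # 2. Random jittering: add IID Gaussian noise.
    x = x + jitter_std * torch.randn_like(x)
    # 3. Random temporal warping: smooth a random velocity field, integrate
    #    it into a displacement field, and resample the signal.
    v = warp_std * torch.randn(T)
    v = F.avg_pool1d(v.view(1, 1, T), 15, stride=1, padding=7).view(T)  # smooth
    disp = torch.cumsum(v, 0)                                           # integrate
    disp = disp - disp.mean()
    src = (torch.arange(T, dtype=torch.float) + disp).clamp(0, T - 1)
    lo, hi = src.floor().long(), src.ceil().long()
    frac = (src - lo).unsqueeze(1)
    x = (1 - frac) * x[lo] + frac * x[hi]                               # linear interp
    return x
```

Applying `augment_ecg` twice to the same signal yields the two correlated views that SimCLR contrasts against each other.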
Test AUC at different FT dataset sizes $|\mathcal{D}_{FT}|$:

Method                    | 100        | 250        | 500        | 1000       | 2500
No PT                     | 71.5 ± 0.7 | 76.1 ± 0.3 | 78.7 ± 0.3 | 82.0 ± 0.2 | 84.5 ± 0.2
SimCLR                    | 74.6 ± 0.4 | 76.5 ± 0.3 | 79.8 ± 0.3 | 82.2 ± 0.3 | 85.8 ± 0.1
Meta-Parameterized SimCLR | 76.1 ± 0.5 | 77.8 ± 0.4 | 81.7 ± 0.2 | 84.0 ± 0.3 | 86.7 ± 0.1

Table 2: Meta-Parameterized SimCLR obtains improved semi-supervised learning performance. Mean AUC/standard error over seeds across 5 FT binary classification tasks for baselines and meta-parameterized SimCLR at different sizes of $\mathcal{D}_{FT}$, with $\mathcal{D}^{(Meta)}_{FT} = \mathcal{D}_{FT}$. We observe improvements in performance with meta-parameterized SimCLR, which optimizes the augmentation pipeline.

Meta-Parameterized SimCLR. To construct a meta-parameterized SimCLR PT scheme, we instantiate meta-parameters $\phi$ as the weights of a neural network $w(x; \phi)$ that takes in an input signal and outputs the warp strength: the variance of the Gaussian that is used to obtain the velocity field for temporal warping. This parameterization permits signals to be warped more or less aggressively depending on their individual structure. With this definition, the SimCLR PT loss is directly a function of the meta-parameters, and we can use Algorithm 1 (with $P = 10$ PT steps and $K = 1$ truncated FT steps) to jointly learn $\phi$ and the feature extractor parameters $\theta_{PT}$. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset $\mathcal{D}_{FT}$ proposed by the dataset creators [64] for computing the relevant gradient terms.

Baselines. Our experimental goals suggest the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• SimCLR: Pre-train a model using SimCLR with the above three augmentations, without learning per-example temporal warping strengths.

Experimental Details. We standardize the experimental setup to facilitate comparisons. All methods use a 1D CNN based on a ResNet-18 [23] architecture. The temporal warping network $w(x; \phi)$ is a four-layer 1D CNN. SimCLR PT takes place for 50 epochs for all methods, over three PT seeds. At evaluation time, for all methods, we initialize a new FT network head over the PT network feature extractor and FT the whole network for 200 epochs, over five FT seeds. Validation set AUC is used for early stopping. We consider two experimental settings: (1) Full FT Access, standard SSL: consider different sizes of the labelled FT dataset $\mathcal{D}_{FT}$ and make all the FT data available at meta-PT time, $\mathcal{D}^{(Meta)}_{FT} = \mathcal{D}_{FT}$; and (2) Partial FT Access, examining the data efficiency of our algorithm: SSL when only limited FT data is available at meta-PT time, $\mathcal{D}^{(Meta)}_{FT} \subseteq \mathcal{D}_{FT}$. We evaluate performance across the 5 binary classification tasks in both settings. Further details are provided in Appendix G.
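The paper specifies $w(x; \phi)$ only as a four-layer 1D CNN; the channel widths, kernel sizes, and the softplus used to keep the output positive in the sketch below are our assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpStrengthNet(nn.Module):
    """Meta-parameterized augmentation: maps an ECG to the (positive) variance
    of the Gaussian velocity field used for temporal warping."""

    def __init__(self, in_ch=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, 16, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(32, 1, 7, stride=2, padding=3),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):
        # x: (B, 12, 1000) batch of ECGs; returns a (B,) positive warp strength.
        return F.softplus(self.net(x)).squeeze(-1).squeeze(-1)
```

Since the warp strength feeds into a differentiable augmentation inside the SimCLR loss, meta-gradients for $\phi$ (the weights of this network) follow directly from Algorithm 1.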
6.2 Results

Key Findings. By optimizing the data augmentation policy used in SimCLR PT, meta-parameterized SimCLR improves performance on the FT problem of detecting pathologies from ECG data. Even a small amount of FT data provided at meta-PT time can lead to improved FT performance.

Table 2 shows results for the Full FT Access setting, $\mathcal{D}^{(Meta)}_{FT} = \mathcal{D}_{FT}$: mean AUC/standard error over seeds across the 5 FT binary classification tasks at different sizes of $\mathcal{D}_{FT}$. We observe that meta-parameterized SimCLR improves on the other baselines in all settings. Note that while these gains are modest, they are obtained with simple augmentation policies; our method may yield further improvements if applied to policies with more scope to specialize the augmentations.

Next, we consider the Partial FT Access scenario where $\mathcal{D}^{(Meta)}_{FT} \subseteq \mathcal{D}_{FT}$, which is relevant when we only have a small amount of FT data at meta-PT time. Fixing $|\mathcal{D}_{FT}| = 500$, we find that with $|\mathcal{D}^{(Meta)}_{FT}|$ as small as 50, we obtain a test AUC of 81.3 ± 0.5, compared to 79.8 ± 0.3 with no optimization of augmentations: this shows that even a small $|\mathcal{D}^{(Meta)}_{FT}|$ appears to be sufficient for meta-parameter learning. Further results showing performance curves varying $|\mathcal{D}^{(Meta)}_{FT}|$ are in Appendix G.

Further experiments. In Appendix G, we study other aspects of our method on this domain, including: (1) exploring different values of $K$, the number of FT steps differentiated through when obtaining meta-parameter gradients; and (2) examining a meta-parameter learning baseline where augmentations are optimized for supervised learning, using the method in [42], and then applied to semi-supervised learning (to compare how optimizing augmentations for supervised learning compares to optimizing them for semi-supervised learning). We find that our method is not very sensitive to the value of $K$ (provided $K > 0$), and that it outperforms this additional baseline.

7 Related Work

Gradient-based hyperparameter optimization (HO): Gradient-based HO roughly falls into two camps. The simpler and less scalable approach differentiates through training [12, 44]. The other approach assumes that optimization reaches a fixed point and approximates the best-response Jacobian [7, 41, 43, 42]. Neither of these approaches can be straightforwardly applied to scalably differentiate through two stages of optimization (PT & FT). Direct differentiation through both stages would be too memory-intensive. Approximating the best-response Jacobian using the IFT as in [42] twice is feasible, but requires changing the FT objective to include a proximal term [55] and tuning two sets of interacting approximations. Instead, we compose a constant-memory IFT approximation for the lengthy PT stage with exact backpropagation-through-training for the shorter FT stage.

Applications of Nested Optimization: Many prior works frame learning as nested optimization, including few-shot learning [16, 1, 17, 55, 21, 58, 53, 75, 31, 38], neural network teaching [14, 15, 62, 54], learning data augmentation and reweighting strategies [32, 22, 57, 60, 29], and auxiliary task learning [49, 51, 39]. The majority of this work studies nested optimization in the standard one-stage supervised learning paradigm, unlike our setting: the two-stage PT & FT problem. The most closely related works to ours are [70], where PT task weights are learned for a multitask PT problem using electronic health record data, and [71], where a masking policy is learned for masked language modelling PT. In contrast to our work, which introduces the more general framing of meta-parameter optimization, [70] and [71] focus only on specific instantiations of meta-parameters as task weights and masking policies. The learning algorithms in these works either differentiate directly through truncated PT & FT [71] (which may not be scalable to longer PT or large encoder models) or leverage extensive first-order approximations [70], unlike our more generally applicable approach.
8 Scope and Limitations

Our gradient-based algorithm applies in situations where we want to optimize (potentially high-dimensional) PT hyperparameters, or meta-parameters, and have access to a model, PT data, and FT data. We demonstrated that even limited FT data availability can be sufficient to guide meta-parameter learning; however, our method would not apply when no FT data at all is available at meta-PT time, or if the model or PT data were not available. Our algorithm requires meta-parameters to be differentiable, and cannot directly be used to optimize meta-parameters that do not affect the PT optimization landscape (e.g., PT learning rates).

9 Conclusion

In this work, we studied the problem of optimizing high-dimensional pre-training (PT) hyperparameters, or meta-parameters. We formalized Meta-Parameterized Pre-Training, a variant of standard PT incorporating these meta-parameters, and proposed a gradient-based algorithm to efficiently learn meta-parameters by approximately differentiating through the two-stage PT & FT learning process. In experiments, we used our algorithm to improve predictive performance on two real-world PT tasks: multitask PT with graph-structured data [28], and self-supervised contrastive PT on electrocardiogram signals using SimCLR [8]. Future work could apply our method to learn other potential instantiations of meta-parameters, such as learned auxiliary tasks and noise models.

Societal Impact. Our contribution in this work is methodological, namely a new algorithm to optimize high-dimensional pre-training hyperparameters. We do not expect there to be direct negative societal impacts of this contribution. However, to evaluate our method, we considered an experimental domain using healthcare data. Given the high-risk nature of this domain, before use in real-world settings the method should be validated in retrospective and prospective studies, in order to detect any failure modes and identify potential harm that may come from deploying it.

Acknowledgements. This work was supported in part by funds from Quanta Computer, Inc. The authors thank the members of the Clinical and Applied Machine Learning group at MIT and Paul Vicol for helpful feedback.
1. What is the focus and contribution of the paper regarding gradient-based HPO?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application to pretraining + finetuning settings?
3. Do you have any concerns about the method's ability to converge to accurate hypergradients?
4. How does the reviewer assess the novelty and limitations of the proposed approach compared to prior works?
5. Are there any questions or suggestions for improving the experimental setup or presentation?
Summary Of The Paper

This work seeks to extend gradient-based HPO to the two-stage setting of pretraining + finetuning. While the method used isn't original, it is used in an original problem setup.

Review

Pros:
- The PT+FT paradigm considered is original and relevant.
- The paper is mostly clear.
- Considering Full FT Access and Partial FT Access is a good thing.

Cons (major):
- You explain that you update psi_FT online, which may remove the memory issues of BPTT but has been shown to lead to biased (greedy) solutions ("Short horizon bias", Wu 2018). These don't converge to hypergradients close to the actual (full-horizon) hypergradients we care about. This is a major limitation of your model, since the gradients that come from the finetuning stage are in my experience almost as good as random noise, and yet this isn't listed in your Limitations section. A toy model where you can use full-horizon BPTT for both PT and FT would allow you to measure how accurate your algorithm is at approximating hypergradients as the number of steps increases in the PT and FT stages.
- While the experiments seem sensible, I feel the experiments I would need to see to make sure your method is competitive with state-of-the-art methods aren't included. For instance, I would have liked to see comparisons with state-of-the-art methods in multi-task learning and/or semi-supervised learning and/or domain adaptation on common image datasets like CIFAR-10, where lots of other methods have been applied. This is because the main difficulties of gradient-based HPO (e.g., gradient degradation) arise for a large number of steps in the PT/FT stages. In line 196 you seem to be using a very short horizon of P=10, which is nowhere near the 10^4 gradient steps you'd need for a CIFAR-10-like dataset.
- There is also an issue of novelty, since the combination of implicit differentiation and BPTT is fairly straightforward (just multiplication, as per the chain rule), and one may argue it doesn't really make up a "new" algorithm in itself.

Cons (minor):
- Your toy experiments in Section 4 are both experiments that don't require your two-stage PT + FT method. Indeed, these have been done where the properties you apply to the FT data are simply applied to the validation data instead. It would be nice to have experiments where ONLY your two-stage approach is sensible.
- I found Figure 1 somewhat confusing. For instance, I'm not sure what the "Finetuning Data" + "Sample" labels achieve, or why the "meta parameters" are connected to "Pre-training" on top of the "Model" and "Pre-train Dataset" labels, since potentially the "meta parameters" could include "Model" and "Pre-training dataset".
- I understood your setup from the equations in 2.1, although a bit more detail would have saved me time. For example, in line 78 you could explicitly state that the final weights of the PT stage are used as the init weights of the FT stage, such that the final weights of the FT stage minimize the FT loss.
NIPS
Title Meta-learning to Improve Pre-training Abstract Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned. The hyperparameters introduced by these strategies therefore must be tuned appropriately. However, setting the values of these hyperparameters is challenging. Most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%. 1 Introduction A popular and important learning paradigm for neural networks is pre-training (PT) followed by finetuning (FT), an approach commonly used in transfer learning [13, 59, 19, 27, 52, 11, 37, 74, 35, 28], and semi-supervised learning [9, 8, 24]. This paradigm has led to performance improvements in many domains, including computer vision [13, 59, 19, 37, 74, 35], natural language processing [27, 52, 11, 40, 34], graph structured prediction [28], and clinical machine learning [45, 46, 2, 48], and is especially helpful in settings where downstream tasks have limited training data. The PT & FT paradigm introduces high-dimensional, complex PT hyperparameters, such as parameterized data augmentation policies used in contrastive representation learning [8, 22] or the use of task, class, or instance weighting variables in multi-task PT to avoid negative transfer [70]. These hyperparameters can significantly affect the quality of pre-trained models [8], and thus finding techniques to set their values optimally is an important area of research. Choosing optimal PT hyperparameter values is challenging, and existing methods do not work well. Simple approaches such as random or grid search are inefficient since evaluating a hyperparameter setting requires performing the full, two-stage PT & FT optimization, which may be prohibitively computationally expensive. Gradient-free approaches, such as Bayesian optimization or evolutionary algorithms [33, 61, 47], are also limited in how well they scale to this setting. Gradient-based 35th Conference on Neural Information Processing Systems (NeurIPS 2021). approaches [44, 41, 43, 42] can be used online to jointly learn hyperparameters and model parameters and can scale to millions of hyperparameters [42], but typically deal with a standard single-stage learning problem (e.g., normal supervised learning) and are therefore not directly applicable to the two-stage PT & FT learning problem. In this work, we address this gap and propose a method for high-dimensional PT hyperparameter optimization. 
We first formalize a variant of the PT & FT paradigm, which we call meta-parameterized pre-training (Figure 1), where meta-parameters refer to arbitrary PT hyperparameters or parameterizable architectural choices that can be optimized to improve the learned representations.1 We outline a meta-learning problem characterizing the optimal meta-parameters propose a gradient-based method to learn meta-parameters. Our contributions are: • We formalize meta-parameterized pre-training, a variant of the pre-training and fine-tuning (PT & FT) paradigm where PT is augmented to incorporate meta-parameters: arbitrary structures that can be optimized to improve learned representations. • We propose a scalable gradient-based algorithm to learn meta-parameters using a novel method to obtain meta-parameter gradients through the two-stage PT & FT process. Our gradient estimator composes a constant-memory implicit differentiation approximation for the longer PT stage and exact backpropagation through training for the shorter FT stage. • We show that our algorithm recovers optimal meta-parameters in toy experiments on synthetic data. • In two real-world experimental domains, we demonstrate our algorithm improves performance. Firstly, on a multitask PT benchmark over biological graph-structured data [28], using our method to optimize meta-parameters representing task weights improves performance by up to 3.9% AUROC. Secondly, for semi-supervised learning using SimCLR [8] over electrocardiography data, using our algorithm to optimize meta-parameters representing the weights of a data augmentation neural network improves performance by up to 1.9% AUROC. 2 Problem Setup and Preliminaries In this section, we define the meta-parameterized pre-training meta-learning problem, and compare it to traditional fine-tuning and pre-training. A full glossary of notation is in Appendix B, Table 3. Notation. Let the subscript • be a placeholder for either PT (pre-training) or FT (fine-tuning), X ⊆ Rd be our input domain, Y• and Ŷ• be the true and predicted output spaces for some model respectively, and Θ,Ψ•,Φ be spaces of parameters for models. We will use f• : X ; (Θ,Ψ•)→ Ŷ• to refer to a parametric model, with the semicolon separating the input space from the parameter spaces. We then define f• = f (head) • ◦ f (feat), such that f (feat)(·;θ ∈ Θ) is a feature extractor that is transferable across learning stages (e.g., pre-training to fine-tuning), and f (head)• (·;ψ ∈ Ψ•) is a stage-specific head that is not transferable. Given a data distribution x•, y• ∼ D•, parametric model f•, and loss function L• : Ŷ• × Y• → R, we will also define for convenience a corresponding expected loss L• : Θ,Ψ• → R via L•(θ,ψ•;D•) = ED• [L•(f•(x•;θ,ψ•), y•)]. We also adopt the convention that the output of the argmin operator is any arbitrary minimum, rather than the set of possible minima, to avoid complications in notation. 2.1 Problem Formulation Supervised Learning (Fig. 1A). In a fully-supervised setting (our fine-tuning domain), we are given a data distribution DFT, model f , and loss LFT. Using a learning algorithm AlgFT (e.g., SGD) that takes as input initial parameters θ(0)FT ,ψ (0) FT , our goal is to approximate the LFT-optimal parameters: θ∗FT,ψ ∗ FT = AlgFT(θ (0) FT ,ψ (0) FT ;DFT) ≈ argminθ∈Θ,ψ∈ΨFT LFT(θ,ψ;DFT) Pre-training (Fig. 1B). 
For tasks where data is scarce, we can additionally incorporate a pretraining step and approximate the optimal initial parameters for FT (i.e., the final pre-trained weights are used as initialization weights of the FT stage), again via an optimization algorithm AlgPT: θ∗PT = AlgPT(θ (0) PT ,ψ (0) PT ;DPT) ≈ argminθ∈Θ LFT(AlgFT(θ,ψ (0) FT ;DFT);DFT). 2 1We use the term meta-parameter since these structures do not directly affect inference of the final model after FT, but instead inform the process of learning this model (by modulating the PT process). 2Note that we discard the PT head ψ∗PT here as only the PT feature extractor θ ∗ PT is transferred. Figure (1) Meta-Parameterized Pre-Training. A paradigm where meta-parameters — rich, potentially high dimensional structures that generalize PT hyperparameters — are incorporated in PT to improve the learned representations. Meta-parameters are optimized in a meta-PT phase, using data from FT task(s) in a meta-FT dataset. The FT and meta-FT datasets are (potentially overlapping) samples from the FT data distribution. Meta-Parameterized PT (Fig. 1C). In Meta-Parameterized PT, we recognize that, in addition to taking as input the PT parameters θ, AlgPT is itself parameterized by a set of meta-parameters φ ∈ Φ: arbitrary, potentially high dimensional quantities that inform the structure of the algorithm directly. These could represent weighting strategies, data augmentation policies, or sampling processes. The optimal meta-parameters φ(opt) are the solution to the following meta-PT optimization problem: φ(opt) = argmin φ∈Φ LFT ( AlgFT ( AlgPT ( θ (0) PT ,ψ (0) PT ;DPT,φ ) ,ψ (0) FT ;DFT ) ;DFT ) . 2.2 Example: Multitask Meta-Parameterized Pre-Training To make our notation concrete, here we instantiate our setup for a multitask pre-training problem. Problem: Suppose we have a multitask classification dataset, (X × Y)N such that Y = Y1 × · · · × YK consists of labels for K distinct tasks. Of this full set of tasks, we are interested only in a subset of M tasks, S = {t1, . . . , tM} ⊆ {1, . . . ,K}. Supervised FT: Under supervised FT alone, we can directly average a cross-entropy loss LCE over only the tasks in S, LFT(ŷ,y) = 1M ∑M j=1 LCE(ŷ(tj), y(tj)), and then solve this problem via SGD. PT: If we assume that S is a random subset of the full set of tasks, we can introduce a PT stage over all tasks: LPT(ŷ,y) = 1K ∑K i=1 LCE(ŷ(i), y(i)), followed by FT on S alone. As S is a random subset, leveraging all tasks for PT is well motivated and may improve performance. Meta-Parameterized PT: In the case where T is not a random subset, the PT strategy described above is no longer well-motivated. However, using meta-parameterized PT, we can still effectively pre-train by introducing the meta-parameters that weight the tasks φ = [φ1 . . . φK ] and modulate the loss function LPT: LPT(ŷ,y;φ) = ∑K i=1 φiLCE(ŷ(i), yi). With optimal meta-parameters φ (opt), the PT stage will leverage only that subset of tasks that best informs the final FT performance. This setting mirrors our real-world experiment in Section 5. 3 Methods: Optimizing Meta-Parameters for Two-Stage Training We now introduce our gradient-based algorithm to optimize meta-parameters. We first describe how to efficiently approximate meta-parameter gradients through the two-stage PT and FT optimization. We then present our algorithm, and outline practical considerations when using it. 
3.1 Efficient Computation of Meta-Parameter Gradients We begin by defining: g(φ;θ (0) PT ,ψ (0) PT ,ψ (0) FT ) = LFT ( AlgFT ( Parameter θPT︷ ︸︸ ︷ AlgPT(θ (0) PT ,ψ (0) PT ;DPT,φ),ψ (0) FT ;DFT )︸ ︷︷ ︸ Parameters θFT,ψFT ;DFT ) , (1) so that φ(opt) = argminφ∈Φ g(φ). We also define two best-response values: θ∗PT(φ) = AlgPT(θ (0) PT ,ψ (0) PT ;DPT,φ), θ∗FT(φ), ψ ∗ FT(φ) = AlgFT(θ ∗ PT(φ),ψ (0) FT ;DFT). We do not explicitly include the dependence of the best responses on the initialization values for notational convenience. With these defined, we now consider the desired gradient term, ∂g∂φ . Under our definitions, the direct partial derivatives ∂LFT∂φ and ∂AlgFT ∂φ are zero, so ∂g ∂φ reduces to a simple expression of the chain rule: ∂g ∂φ ∣∣∣∣ φ′ = ∂LFT ∂ [θFT, ψFT] ∣∣∣∣ θ∗FT(φ ′),ψ∗FT(φ ′)︸ ︷︷ ︸ FT Loss Gradient × FT Best Response Jacobian︷ ︸︸ ︷ ∂AlgFT ∂θPT ∣∣∣∣ θ∗PT(φ ′) × ∂AlgPT ∂φ ∣∣∣∣ φ′︸ ︷︷ ︸ PT Best Response Jacobian . (2) The FT Loss Gradient term on the RHS of (2) is easily computed using backpropagation. Computing the other two terms is more involved, and we detail each below, beginning with the PT best response Jacobian. The full algorithm with both gradient estimation terms is provided in Algorithm 1. PT Best Response Jacobian ∂AlgPT∂φ . Using recent work in hyperparameter optimization with implicit differentiation [42], we re-express this term using the implicit function theorem (IFT). If we assume that θ∗PT(φ) = AlgPT ( θ (0) PT ;DPT,φ ) is a good approximation of argminθ∈Θ LPT (θ;DPT,φ) (i.e., the PT model converges to LPT-optimal parameters), then under certain smoothness and regularity assumptions on the PT parameters and meta-parameters, the IFT allows us to re-express ∂AlgPT∂φ as: ∂AlgPT ∂φ ∣∣∣∣ φ′ = − [ ∂2LPT ∂θPT ∂θ>PT ]−1 × ∂ 2LPT ∂θPT ∂φ > ∣∣∣∣ θ∗PT(φ ′),φ′ , (3) which is the product of the inverse Hessian and a matrix of mixed partial derivatives. Following [42], the inverse can be efficiently approximated using a truncated Neumann series. FT Best Response Jacobian ∂AlgFT∂θPT . First, note that without additional constraints on AlgFT, the FT best response Jacobian may be zero. This is because LFT has no functional dependence on the variable θPT and, if we assume the convergence point θ∗FT is stable (as we did for the PT best response Jacobian), this implies that the gradient of θ∗FT with respect to θPT would be zero. To enable effective learning, we must therefore either (1) impose restrictions on AlgFT to ensure there is a dependence between the initialization point and the final loss value (e.g., proximal regularization [55]) or (2) leverage methods that do not differentiate through AlgFT through convergence, as at non-converged points we will still observe nonzero LFT-gradients [29, 51]. Given that the FT phase often involves shorter optimization horizons than PT, we take approach 2 here, and iteratively update θFT for K steps. We first initialize the FT head ψ(0)FT and then compute: θ (0) FT = copy(θ ∗ PT) (init with PT solution, implicitly performing stop gradient) θ (k) FT ,ψ (k) FT = [ θ (k−1) FT , ψ (k−1) FT ] − ηFT ∂LFT ∂ [θFT, ψFT] ∣∣∣∣ θ (k−1) FT ,ψ (k−1) FT k = 1, . . . ,K θ∗FT,ψ ∗ FT ≈ θ (K) FT ,ψ (K) FT , (4) and compute the gradient ∂AlgFT∂θPT ∣∣∣ θ∗PT(φ ′) by differentiating through this optimization.3 We can also choose to freeze the feature extractor parameters θFT and update only the head parameters ψFT during truncated FT, and use this to obtain meta-parameter gradients. 
This resembles linear evaluation, where a linear classifier is trained on top of fixed, pre-trained feature extractors [50, 3, 63]. Together, these two approximations allow for efficient computation of meta-parameter gradients. 3While Equation 4 uses standard gradient descent, we could use other differentiable optimizers (e.g., Adam). Algorithm 1 Gradient-based algorithm to learn meta-parameters. Notation defined in Appendix B, Table 3. Vector-Jacobian products (VJPs) can be efficiently computed by standard autodifferentiation. 1: Initialize PT parameters θ(init)PT ,ψ (init) PT ,ψ (0) FT and meta-parameters φ (0) 2: for n = 1, . . . , N iterations do 3: Initialize θ(0)PT = θ (init) PT and ψ (0) PT = ψ (init) PT . 4: for p = 1, . . . , P PT iterations do 5: [ θ (p) PT ,ψ (p) PT ] = [ θ (p−1) PT ,ψ (p−1) PT ] − ηPT ∂LPT ∂[θPT,ψPT] ∣∣∣∣ θ (p−1) PT ,ψ (p−1) PT 6: end for 7: Initialize FT encoder with PT solution: θ(0)FT = copy(θ (P ) PT ). 8: Approximate θ∗FT,ψ ∗ FT using Eq. 4. 9: Compute g1 = ∂LFT ∂[θFT, ψFT] ∣∣∣∣ θ∗FT,ψ ∗ FT 10: Compute VJP g2 = g1 ∂AlgFT ∂θPT ∣∣∣ θ (P ) PT ,ψ (0) FT using the unrolled learning step from line 8. 11: Approximate VJP ∂g∂φ ∣∣∣ φ(n−1) = g2 ∂AlgPT ∂φ ∣∣∣ φ(n−1) using the IFT (Eq. 3). 12: φ(n) = φ(n−1) − ηV ∂g∂φ ∣∣∣ φ(n−1) 13: Update PT initialization by setting: θ(init)PT = θ (P ) PT and ψ (init) PT = ψ (P ) PT . 14: end for 3.2 Our Algorithm and Practical Considerations By leveraging the above approximations, we obtain Algorithm 1 to optimize meta-parameters φ online during PT & FT of the base model. Note that AlgPT is explicitly written out as a sequence of gradient updates (lines 4-6 in Algorithm 1). We now discuss practical considerations when using this algorithm, with further details given in Appendix C. (1) Access to DFT and generalizing to new FT tasks: Solving the meta-PT problem requires availability of: the model f•, the PT data DPT, and the FT data DFT. In this work, we assume availability of the model and PT dataset, but since assuming access to the complete FT dataset at meta-PT time is more restrictive, we study two scenarios: Full FT Access, where all FT data that we expect to encounter is available at meta-PT time, and Partial FT Access, where the FT data available at meta-PT time is only a sample from a distribution of FT data that we may encounter later. Full FT Access occurs in settings like semi-supervised learning, where we are given a large unlabelled PT dataset and a small labelled FT dataset and our goal is to achieve the best possible performance by leveraging these two fixed datasets [68, 73, 25, 24, 8, 9]. Partial FT Access occurs when our goal is to learn transferable representations: at meta-PT time, we might have limited knowledge of FT tasks or data. In evaluating this scenario, we examine generalizability to new FT tasks, given only small amounts of FT data/task availability at meta-PT time, demonstrating that even very limited FT access can be sufficient for effective meta-parameter optimization [11, 45, 56, 28]. (2) DFT splits: In practice, we have access to finite datasets and use minibatches, rather than true datagenerating processes. Following standard convention, we splitDFT into two subsets for meta-learning: D(tr)FT and D (val) FT (independent of any held-out DFT testing split), and define the FT data available at meta-PT time as D(Meta)FT = D (tr) FT ∪ D (val) FT . 
We use D (tr) FT for the computation of ∂AlgFT ∂θPT ∣∣∣ θ (P ) PT ,ψ (0) FT and ∂AlgPT ∂φ ∣∣∣ φ(n−1) and D(val)FT for the computation of ∂LFT∂[θFT, ψFT] ∣∣∣∣ θ∗FT,ψ ∗ FT in Algorithm 1. (3) Online updates: Given that PT phases often involve long optimization horizons, for computational efficiency, we update θPT andψPT online rather than re-initializing them at every meta-iteration (see Algorithm 1). FT phases are often shorter so we could in theory re-initialize ψFT at each meta-iteration, as is presented in Algorithm 1. However, it is more computationally efficient to also optimize this online, and we follow this approach in our experiments. A description of the algorithm with these details in Appendix C. Note that prior work [67] has suggested that online optimization of certain hyperparameters (e.g., learning rates) using short horizons may yield suboptimal solutions. We comment on this in Appendix C, study this effect for our algorithm in synthetic experiments in Appendix E, and in real-world experiments on self-supervised learning in Appendix G, revealing it is not a significant concern. (4) Computational tractability: Our method can scale to large encoder models and highdimensional meta-parameters, despite the complexity of the two-stage PT & FT process. This is because: (i) meta-parameters are optimized jointly with the base model parameters; (ii) using the IFT to obtain gradients has similar time and memory complexity to one iteration of training [42]; (iii) the FT best response Jacobian can be approximated efficiently using a small number of unrolled optimization steps K, and by only unrolling the FT head of the network. In our real-world experiments (Sections 5 and 6), meta-parameterized PT has less than twice the time cost of standard PT. Further details on time and memory cost are provided in Appendices F and G. (5) Setting optimizer parameters: Learning rates and momentum values can impact the efficacy of the algorithm. A discussion on how to set them in practice is provided in Appendix D. 4 Synthetic Experiments We validate that our algorithm recovers optimal low and high dimensional meta-parameters in two synthetic MNIST experiments with Full FT Access. Further details and results are provided in Appendix E, including a study of how our method performs comparably to differentiating exactly through the entire learning process of PT & FT, without approximations. First, we optimize low dimensional meta-parameters characterizing a data augmentation scheme. We tune a 1-D meta-parameter φ representing the mean of a Normal distribution N (φ, 12) from which we sample rotation augmentations to apply to PT images. FT images undergo rotations from a Normal distribution N (µFT, 12) with µFT = 90◦; we therefore expect that φ should converge to near µFT. Using Algorithm 1 to optimize φ we find that the mean error in the optimized meta-parameter over 10 different initializations is small: 7.2± 1.5◦, indicating efficacy of the algorithm. Next, we consider learning high dimensional meta-parameters that characterize a PT per-example weighting scheme. The PT dataset contains some examples that have noisy labels, and FT examples all have clean labels. The meta-parameters are the parameters of a neural network that assigns importance weights to each PT example, which is used to weight the loss on that example during PT. 
We use Algorithm 1 again to optimize φ, over 10 random initializations, finding the ratio of assigned importance weights between clean label PT examples and noisy label PT examples is greater than 102. This is expected since the noisy label classes may worsen the quality of the PT model and so should be down-weighted. 5 Meta-Parameterized Multitask Pre-Training for Graph Neural Networks We consider optimizing PT task weights for a multitask PT & FT problem of predicting the presence of protein functions (multitask binary classification) given graph-structured biological data as input. We have two experimental goals: first, in the Full FT Access setting, where methods are given access to all FT data at PT time, we evaluate whether optimizing task weighting meta-parameters can improve predictive performance on the FT tasks. Second, motivated by how in typical transfer learning problems, new tasks or labels not available at PT time may become available at FT time, we study the Partial FT Access setting, investigating how our method performs when it only sees limited FT tasks at PT time. In both settings, our method outperforms baselines. 5.1 Problem Setup Dataset and Task. We consider the transfer learning benchmark introduced in [28], where the prediction problem at both PT and FT is multitask binary classification: predicting the presence/absence of specific protein functions (y) given a Protein-Protein Interaction (PPI) network as input (rep- resented as a graph x). The PT dataset has pairs DPT = {(xi, yi)}|DPT|i=1 , where y ∈ {0, 1}5000 characterizes the presence/absence of 5000 particular protein functions. The FT dataset has pairs DFT = {(xi, yi)}|DFT|i=1 , where y ∈ {0, 1}40 now characterizes the presence/absence of 40 different protein functions. Further dataset details in Appendix F. Meta-Parameterized Multitask PT. To define a meta-parameterized PT scheme, we let metaparameters φ ∈ R5000 be weights for the binary PT tasks. Then, we define a PT loss incorporating the weights: LPT = 15000 ∑5000 i=1 2 σ(φi) LCE(fPT(x;θPT,ψPT)i, yi),with i indexing the tasks, σ(·) representing the sigmoid function (to ensure non-negativity and clamp the range of the weights), and LCE denoting the binary cross-entropy loss. With this loss defined, we use Algorithm 1 (with P = 10 PT steps and K = 1 truncated FT steps) to jointly learn φ and the feature extractor parameters θPT. For computational efficiency, we only update the FT head when computing the FT best response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset DFT proposed by the dataset creators [28] for computing the relevant gradient terms. Baselines. Motivated by our goals, we compare with the following PT baselines: • No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized). • Graph Supervised PT: As explored in prior work on this domain [28], perform multitask super- vised PT with DPT. This corresponds to setting all task weights to 1: φi = 1, i = 1, . . . , 5000. • CoTrain: A common baseline that makes use of the FT data available during PT [70] (like meta- parameterized PT). We PT a model with 5000+40 outputs (covering the space of PT and FT labels) jointly on both DPT and DFT. We do so by alternating gradient updates on batches sampled from each dataset in turn. Further details are in Appendix F. 
Baselines. Motivated by our goals, we compare with the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• Graph Supervised PT: As explored in prior work on this domain [28], perform multitask supervised PT with D_PT. This corresponds to setting all task weights to 1: φ_i = 1, i = 1, ..., 5000.
• CoTrain: A common baseline that makes use of the FT data available during PT [70] (like meta-parameterized PT). We PT a model with 5000+40 outputs (covering the space of PT and FT labels) jointly on both D_PT and D_FT. We do so by alternating gradient updates on batches sampled from each dataset in turn. Further details are in Appendix F.
• CoTrain + PCGrad: An extension of CoTrain, where we leverage the method PCGrad [72] to perform gradient projection and prevent destructive gradient interference between updates from D_PT and D_FT. Further details, and the variants we tried, are in Appendix F.

Experimental Details. We use a standardized setup to facilitate comparisons. Following [28], all methods use the Graph Isomorphism Network architecture [69], undergo PT for 100 epochs and FT for 50 epochs over 5 random seeds, and use early stopping based on validation set performance. During FT, we initialize a new FT network head and either FT the whole network or freeze the PT feature extractor and learn the FT head alone (Linear Evaluation [50]). We report results for the strategy that performed best (full results in the appendix). We consider two experimental scenarios: (1) Full FT Access: provide methods full access to D_PT and D_FT at PT time (D_FT^(Meta) = D_FT) and evaluate on the full set of 40 FT tasks; (2) Partial FT Access: limit the number of FT tasks seen at PT time by letting D_FT^(Meta) include only 30 of the 40 FT tasks. At FT time, models are fine-tuned on the held-out 10 tasks not in D_FT^(Meta). We use a 4-fold approach where we leave out 10 of the 40 FT tasks in turn, and examine performance across these 10 held-out tasks, over the folds.

5.2 Results

Key Findings. By optimizing PT task weights, meta-parameterized multitask PT improves performance on the FT problem of predicting the presence/absence of protein functions given a protein-protein interaction graph as input. Performance improvements are also seen when generalizing to new FT tasks (protein functions) unseen at meta-PT time.

Table 1 presents quantitative results for the two experimental settings described. For the No PT and Graph Supervised PT baselines, we re-implement the methods from [28], obtaining improved results (full comparison in Appendix Table 5). In both full and partial FT access settings, meta-parameterized PT improves significantly on other methods, indicating that optimizing meta-parameters can improve predictive performance generally, and can be effective even when new, related tasks are considered at evaluation time. Interestingly, we observe that CoTrain and CoTrain + PCGrad obtain relatively poor performance compared to other baselines; this could be because these methods overfit to the FT data during PT. Further analysis is presented in Appendix F.

Further experiments. In Appendix F, we study another partial FT access scenario with smaller D_FT^(Meta), setting |D_FT^(Meta)| = 0.5 |D_FT|, and find that meta-parameterized PT again outperforms other methods (Table 7). We also examine another meta-parameter learning baseline, namely a version of CoTrain where we optimize task weights using a traditional hyperparameter optimization algorithm [42] jointly with the main model. We find that our method outperforms this baseline as well (Table 5).

Method                | AUC (D_FT^(Meta) = D_FT) | AUC (D_FT^(Meta) excludes tasks)
No PT                 | 66.6 ± 0.7               | 65.8 ± 2.5
Graph Supervised PT   | 74.7 ± 0.1               | 74.8 ± 1.8
CoTrain               | 70.2 ± 0.3               | 69.3 ± 1.8
CoTrain + PCGrad      | 69.4 ± 0.2               | 68.1 ± 2.3
Meta-Parameterized PT | 78.6 ± 0.1               | 77.0 ± 1.3

Table (1) Meta-Parameterized PT improves predictive performance over baselines. The table shows mean AUC and standard error for two evaluation settings. When provided all FT data at PT time (first results column), meta-parameterized PT significantly improves predictive performance.
In the more challenging setting where D_FT^(Meta) excludes FT tasks (10 of the 40 available tasks are held out), evaluating mean AUC/standard error across four folds with each set of 10 FT tasks held out in turn, meta-parameterized PT again obtains the best performance: it is effective even with partial information about the downstream FT tasks.

Analysis of learned structures. In Appendix F, we conduct further analysis and study the effect of various PT strategies on the pre-trained representations (Figure 3), finding intuitive patterns of similarity between different methods. We also examine the learned task weights (Figure 4) and performance on a per-FT-task basis with/without meta-parameterized PT (Figure 5), finding little evidence of negative transfer.

6 Meta-Parameterized SimCLR for Semi-Supervised Learning with ECGs

We now explore a second real-world application of our method: optimizing a data augmentation policy for self-supervised PT with SimCLR [8, 9] on electrocardiograms (ECGs). SimCLR is a popular self-supervised PT method that leverages data augmentations to define a contrastive PT objective (details in Appendix G.1). The choice and strength of the augmentations used significantly impact the effectiveness of the algorithm [8]. In settings where relevant augmentations are known (e.g., natural images), SimCLR is readily applicable; however, for ECGs, effective augmentations are less clear, motivating the use of our algorithm to optimize the augmentation pipeline.

We have two experimental goals. Firstly, we examine the typical semi-supervised learning setting of Full FT Access: we explore whether optimizing the augmentations in SimCLR PT can improve performance on the supervised FT task of detecting pathologies from ECGs, given access to all FT data at meta-PT time. Secondly, to study the data efficiency of our method, we consider the Partial FT Access setting and explore performance given access to limited FT data at meta-PT time. We find that our method improves the performance of SimCLR, and that it is effective even with very limited amounts of FT data provided at meta-PT time.

6.1 Problem Setup

Dataset and Task. We construct a semi-supervised learning (SSL) problem using PTB-XL [64, 20], an open-source dataset of electrocardiogram (ECG) data. Let the model input at both PT and FT time be denoted by x, which represents a 12-lead (or channel) ECG sampled at 100 Hz for 10 seconds, resulting in a 1000 × 12 signal. Our goal is to pre-train a model f_PT on an unlabeled PT dataset of ECGs D_PT = {x_i}_{i=1}^{|D_PT|} using SimCLR PT [8], and then fine-tune it on the labeled FT dataset D_FT = {(x_i, y_i)}_{i=1}^{|D_FT|}, where the FT labels y ∈ {0,1}^5 encode whether the signal contains certain features indicative of particular diseases/pathologies. Further dataset details are in Appendix G.

ECG Data Augmentations. To augment each ECG for SimCLR (example in Appendix G, Figure 6), we apply three transformations in turn, based on prior work in time series augmentation [30, 66] (a code sketch follows the list):
1. Random cropping: A randomly selected portion of the signal is zeroed out.
2. Random jittering: IID Gaussian noise is added to the signal.
3. Random temporal warping: The signal is warped with a random, diffeomorphic temporal transformation. This is formed by sampling from a zero-mean, fixed-variance Gaussian at each temporal location in the signal to obtain a velocity field, and then integrating and smoothing (following [4, 5]) to generate a temporal displacement field, which is applied to the signal.
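A minimal NumPy sketch of these three augmentations follows. All parameter defaults (crop fraction, noise scale, smoothing width) and the exact smoothing/normalization choices are our own illustrative assumptions; the paper's implementation (Appendix G) may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def random_crop(x, max_frac=0.2, rng=np.random):
    """Zero out a randomly selected contiguous portion of x, shape (T, C)."""
    T = x.shape[0]
    length = rng.randint(1, int(max_frac * T) + 1)
    start = rng.randint(0, T - length + 1)
    out = x.copy()
    out[start:start + length] = 0.0
    return out

def random_jitter(x, sigma=0.05, rng=np.random):
    """Add IID Gaussian noise to the signal."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def random_temporal_warp(x, warp_sigma=0.3, smooth=10.0, rng=np.random):
    """Sample a velocity field, integrate and smooth it into a displacement
    field, then resample the signal along the warped time axis."""
    T, C = x.shape
    velocity = rng.normal(0.0, warp_sigma, size=T)
    displacement = gaussian_filter1d(np.cumsum(velocity), smooth)
    warped_t = np.clip(np.arange(T) + displacement, 0, T - 1)
    return np.stack(
        [np.interp(warped_t, np.arange(T), x[:, c]) for c in range(C)], axis=1
    )
```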
Test AUC at different FT dataset sizes |D_FT|:

Method                    | 100        | 250        | 500        | 1000       | 2500
No PT                     | 71.5 ± 0.7 | 76.1 ± 0.3 | 78.7 ± 0.3 | 82.0 ± 0.2 | 84.5 ± 0.2
SimCLR                    | 74.6 ± 0.4 | 76.5 ± 0.3 | 79.8 ± 0.3 | 82.2 ± 0.3 | 85.8 ± 0.1
Meta-Parameterized SimCLR | 76.1 ± 0.5 | 77.8 ± 0.4 | 81.7 ± 0.2 | 84.0 ± 0.3 | 86.7 ± 0.1

Table (2) Meta-Parameterized SimCLR obtains improved semi-supervised learning performance. The table shows mean AUC/standard error over seeds across 5 FT binary classification tasks for baselines and meta-parameterized SimCLR at different sizes of D_FT, with D_FT^(Meta) = D_FT. We observe improvements in performance with meta-parameterized SimCLR, which optimizes the augmentation pipeline.

Meta-Parameterized SimCLR. To construct a meta-parameterized SimCLR PT scheme, we instantiate meta-parameters φ as the weights of a neural network w(x; φ) that takes in an input signal and outputs the warp strength: the variance of the Gaussian that is used to obtain the velocity field for temporal warping. This parameterization permits signals to be warped more or less aggressively depending on their individual structure (a sketch is given below). With this definition, the SimCLR PT loss is directly a function of the meta-parameters, and we can use Algorithm 1 (with P = 10 PT steps and K = 1 truncated FT steps) to jointly learn φ and the feature extractor parameters θ_PT. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset D_FT proposed by the dataset creators [64] for computing the relevant gradient terms.
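A minimal PyTorch sketch of such a warp-strength network is below; the architecture (a small 1D CNN with global pooling) and the softplus output are our own assumptions. Note that for meta-gradients to flow, the warp itself must be implemented differentiably (e.g., with torch-based interpolation); the `differentiable_temporal_warp` referenced in the usage comment is a hypothetical placeholder for such an implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WarpStrengthNet(nn.Module):
    """w(x; phi): maps a (batch, channels, time) ECG to a per-example
    warp strength (the variance of the velocity-field Gaussian)."""
    def __init__(self, in_channels=12, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
        )
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.conv(x).mean(dim=-1)                 # global average pool over time
        return F.softplus(self.head(h)).squeeze(-1)   # non-negative strength

# Usage inside the SimCLR augmentation pipeline (warp left abstract):
# strength = warp_net(x)                              # shape (batch,)
# x_aug = differentiable_temporal_warp(x, strength)   # hypothetical helper
```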
Baselines. Our experimental goals suggest the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• SimCLR: Pre-train a model using SimCLR with the above three augmentations, without learning per-example temporal warping strengths.

Experimental Details. We standardize the experimental setup to facilitate comparisons. All methods use a 1D CNN based on a ResNet-18 [23] architecture. The temporal warping network w(x; φ) is a four-layer 1D CNN. SimCLR PT takes place for 50 epochs for all methods, over three PT seeds. At evaluation time, for all methods, we initialize a new FT network head over the PT network feature extractor and FT the whole network for 200 epochs, over five FT seeds. Validation set AUC is used for early stopping. We consider two experimental settings: (1) Full FT Access, standard SSL: consider different sizes of the labelled FT dataset D_FT and make all the FT data available at meta-PT time, D_FT^(Meta) = D_FT; and (2) Partial FT Access, examining the data efficiency of our algorithm: SSL when only limited FT data is available at meta-PT time, D_FT^(Meta) ⊆ D_FT. We evaluate performance across the 5 binary classification tasks in both settings. Further details are provided in Appendix G.

6.2 Results

Key Findings. By optimizing the data augmentation policy used in SimCLR PT, meta-parameterized SimCLR improves performance on the FT problem of detecting pathologies from ECG data. Even a small amount of FT data provided at meta-PT time can lead to improved FT performance.

Table 2 shows results for the Full FT Access setting, D_FT^(Meta) = D_FT: mean AUC/standard error over seeds across the 5 FT binary classification tasks at different sizes of D_FT. We observe that meta-parameterized SimCLR improves on the baselines in all settings. Note that while these gains are modest, they are obtained with simple augmentation policies; our method may yield further improvements if applied to policies with more scope to specialize the augmentations.

Next, we consider the Partial FT Access scenario where D_FT^(Meta) ⊆ D_FT, which is relevant when we only have a small amount of FT data at meta-PT time. Fixing |D_FT| = 500, we find that with |D_FT^(Meta)| as small as 50, we obtain a test AUC of 81.3 ± 0.5, compared to 79.8 ± 0.3 with no optimization of augmentations: this shows that even a small |D_FT^(Meta)| appears to be sufficient for meta-parameter learning. Further results showing performance curves as |D_FT^(Meta)| varies are in Appendix G.

Further experiments. In Appendix G, we study other aspects of our method on this domain, including: (1) exploring different values of K, the number of FT steps differentiated through when obtaining meta-parameter gradients; and (2) examining a meta-parameter learning baseline where augmentations are optimized for supervised learning, using the method in [42], and then applied to semi-supervised learning (to compare how optimizing augmentations for supervised learning compares to optimizing them for semi-supervised learning). We find that our method is not very sensitive to the value of K (provided K > 0), and that it outperforms this additional baseline.

7 Related Work

Gradient-based hyperparameter optimization (HO): Gradient-based HO roughly falls into two camps. The simpler and less scalable approach differentiates through training [12, 44]. The other approach assumes that optimization reaches a fixed point and approximates the best-response Jacobian [7, 41, 43, 42]. Neither of these approaches can be straightforwardly applied to scalably differentiate through two stages of optimization (PT & FT). Direct differentiation through both stages would be too memory-intensive. Approximating the best-response Jacobian using the IFT as in [42] twice is feasible, but requires changing the FT objective to include a proximal term [55] and tuning two sets of interacting approximations. Instead, we compose a constant-memory IFT approximation for the lengthy PT stage with exact backpropagation-through-training for the shorter FT stage.

Applications of Nested Optimization: Many prior works frame learning as nested optimization, including few-shot learning [16, 1, 17, 55, 21, 58, 53, 75, 31, 38], neural network teaching [14, 15, 62, 54], learning data augmentation and reweighting strategies [32, 22, 57, 60, 29], and auxiliary task learning [49, 51, 39]. The majority of this work studies nested optimization in the standard one-stage supervised learning paradigm, unlike our setting: the two-stage PT & FT problem. The most closely related works to ours are [70], where PT task weights are learned for a multitask PT problem using electronic health record data, and [71], where a masking policy is learned for masked language modelling PT. In contrast to our work, which introduces the more general framing of meta-parameter optimization, [70] and [71] focus only on specific instantiations of meta-parameters as task weights and masking policies. The learning algorithms in these works either differentiate directly through truncated PT & FT [71] (which may not scale to longer PT or large encoder models) or leverage extensive first-order approximations [70], unlike our more generally applicable approach.
8 Scope and Limitations

Our gradient-based algorithm applies in situations where we want to optimize (potentially high-dimensional) PT hyperparameters, or meta-parameters, and have access to a model, PT data, and FT data. We demonstrated that even limited FT data availability can be sufficient to guide meta-parameter learning; however, our method would not apply when no FT data at all is available at meta-PT time, or if the model or PT data were not available. Our algorithm requires meta-parameters to be differentiable, and cannot directly be used to optimize meta-parameters that do not affect the PT optimization landscape (e.g., PT learning rates).

9 Conclusion

In this work, we studied the problem of optimizing high-dimensional pre-training (PT) hyperparameters, or meta-parameters. We formalized Meta-Parameterized Pre-Training, a variant of standard PT incorporating these meta-parameters, and proposed a gradient-based algorithm to efficiently learn meta-parameters by approximately differentiating through the two-stage PT & FT learning process. In experiments, we used our algorithm to improve predictive performance on two real-world PT tasks: multitask PT with graph-structured data [28], and self-supervised contrastive PT on electrocardiogram signals using SimCLR [8]. Future work could apply our method to learn other potential instantiations of meta-parameters, such as learned auxiliary tasks and noise models.

Societal Impact. Our contribution in this work is methodological, namely a new algorithm to optimize high-dimensional pre-training hyperparameters. We do not expect there to be direct negative societal impacts of this contribution. However, to evaluate our method, we considered an experimental domain using healthcare data. Given the high-risk nature of this domain, before use in real-world settings, the method should be validated in retrospective and prospective studies. This is to detect any failure modes and identify potential harm that may come from deploying it.

Acknowledgements

This work was supported in part by funds from Quanta Computer, Inc. The authors thank the members of the Clinical and Applied Machine Learning group at MIT and Paul Vicol for helpful feedback.
1. What is the focus of the paper regarding hyperparameter optimization?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like MAML?
3. How does the reviewer assess the clarity, quality, significance, and originality of the paper's content?
4. What are the limitations of the proposed approach, particularly in terms of differentiable hyperparameters?
5. Are there any questions or concerns regarding the notation and terminology used in the paper, such as the use of uppercase theta, psi, and phi?
Summary Of The Paper Review
Summary Of The Paper
The paper describes a gradient-based hyperparameter optimization method for differentiable pre-training hyperparameters in a pre-training and fine-tuning setup. The proposed approach approximately differentiates through the two-stage learning process. The authors show the benefit of their approach in two experiments, where they outperform multiple baselines.

Review
Originality: In my opinion, this kind of meta-learning on two-stage setups is not new (see refs. [67] and [68]), but the more general approach is novel. It feels a bit like MAML [15] for differentiable hyperparameters (MAML is mentioned in the related work). The difference from related work is described, and related work is cited.

Clarity: I had issues following this paper. While in the abstract and introduction the authors talk about "hyperparameters", they mention only in the limitations section that the hyperparameters have to be differentiable. When I read about hyperparameters of neural network training, I think mainly of parameters like the learning rate. The abstract promises an HPO method for hyperparameters in general, without mentioning this limitation. In the second section, in the paragraph "Notation", the authors introduce uppercase theta, uppercase psi, and uppercase phi as "spaces of parameters for models" without any further description of the difference between these "spaces". Two pages later, in line 123, the authors name phi "encoder parameters" and psi "decoder parameters", but without mentioning that they assume an encoder-decoder setup. In the abstract and introduction, the authors write about optimizing neural networks, while in the problem setup they describe the more general case of a "parametric model". I wasn't able to follow their algorithm due to the lack of explanation of theta, psi, and phi. In the experiment section, it is unclear where the definition of the loss comes from and how P and K are tuned. I would propose to clarify what kind of hyperparameters are addressed by this approach and to rework the problem formulation and notation.

Quality: I can't assess the method, since I didn't understand the algorithm in detail due to the lack of explanation of theta, psi, and phi. The method is evaluated in two experiments with good results.

Significance: In my opinion, the experiments are very specialized for the general tone of the abstract and introduction. I couldn't follow the notation easily. Even though I am not a domain expert in gradient-based hyperparameter optimization, this shouldn't be the case, in my opinion. In the current state of this paper, I find it hard to make use of the approach.
NIPS
Title
Meta-learning to Improve Pre-training

Abstract
Pre-training (PT) followed by fine-tuning (FT) is an effective method for training neural networks, and has led to significant performance improvements in many domains. PT can incorporate various design choices such as task and data reweighting strategies, augmentation policies, and noise models, all of which can significantly impact the quality of representations learned. The hyperparameters introduced by these strategies therefore must be tuned appropriately. However, setting the values of these hyperparameters is challenging. Most existing methods either struggle to scale to high dimensions, are too slow and memory-intensive, or cannot be directly applied to the two-stage PT and FT learning process. In this work, we propose an efficient, gradient-based algorithm to meta-learn PT hyperparameters. We formalize the PT hyperparameter optimization problem and propose a novel method to obtain PT hyperparameter gradients by combining implicit differentiation and backpropagation through unrolled optimization. We demonstrate that our method improves predictive performance on two real-world domains. First, we optimize high-dimensional task weighting hyperparameters for multitask pre-training on protein-protein interaction graphs and improve AUROC by up to 3.9%. Second, we optimize a data augmentation neural network for self-supervised PT with SimCLR on electrocardiography data and improve AUROC by up to 1.9%.

1 Introduction

A popular and important learning paradigm for neural networks is pre-training (PT) followed by fine-tuning (FT), an approach commonly used in transfer learning [13, 59, 19, 27, 52, 11, 37, 74, 35, 28] and semi-supervised learning [9, 8, 24]. This paradigm has led to performance improvements in many domains, including computer vision [13, 59, 19, 37, 74, 35], natural language processing [27, 52, 11, 40, 34], graph structured prediction [28], and clinical machine learning [45, 46, 2, 48], and is especially helpful in settings where downstream tasks have limited training data.

The PT & FT paradigm introduces high-dimensional, complex PT hyperparameters, such as parameterized data augmentation policies used in contrastive representation learning [8, 22] or the use of task, class, or instance weighting variables in multi-task PT to avoid negative transfer [70]. These hyperparameters can significantly affect the quality of pre-trained models [8], and thus finding techniques to set their values optimally is an important area of research.

Choosing optimal PT hyperparameter values is challenging, and existing methods do not work well. Simple approaches such as random or grid search are inefficient, since evaluating a hyperparameter setting requires performing the full, two-stage PT & FT optimization, which may be prohibitively computationally expensive. Gradient-free approaches, such as Bayesian optimization or evolutionary algorithms [33, 61, 47], are also limited in how well they scale to this setting. Gradient-based approaches [44, 41, 43, 42] can be used online to jointly learn hyperparameters and model parameters, and can scale to millions of hyperparameters [42], but typically deal with a standard single-stage learning problem (e.g., normal supervised learning) and are therefore not directly applicable to the two-stage PT & FT learning problem. In this work, we address this gap and propose a method for high-dimensional PT hyperparameter optimization.
We first formalize a variant of the PT & FT paradigm, which we call meta-parameterized pre-training (Figure 1), where meta-parameters refer to arbitrary PT hyperparameters or parameterizable architectural choices that can be optimized to improve the learned representations.¹ We outline a meta-learning problem characterizing the optimal meta-parameters and propose a gradient-based method to learn them. Our contributions are:
• We formalize meta-parameterized pre-training, a variant of the pre-training and fine-tuning (PT & FT) paradigm where PT is augmented to incorporate meta-parameters: arbitrary structures that can be optimized to improve learned representations.
• We propose a scalable gradient-based algorithm to learn meta-parameters using a novel method to obtain meta-parameter gradients through the two-stage PT & FT process. Our gradient estimator composes a constant-memory implicit differentiation approximation for the longer PT stage and exact backpropagation through training for the shorter FT stage.
• We show that our algorithm recovers optimal meta-parameters in toy experiments on synthetic data.
• In two real-world experimental domains, we demonstrate that our algorithm improves performance. Firstly, on a multitask PT benchmark over biological graph-structured data [28], using our method to optimize meta-parameters representing task weights improves performance by up to 3.9% AUROC. Secondly, for semi-supervised learning using SimCLR [8] over electrocardiography data, using our algorithm to optimize meta-parameters representing the weights of a data augmentation neural network improves performance by up to 1.9% AUROC.

2 Problem Setup and Preliminaries

In this section, we define the meta-parameterized pre-training meta-learning problem and compare it to traditional fine-tuning and pre-training. A full glossary of notation is in Appendix B, Table 3.

Notation. Let the subscript • be a placeholder for either PT (pre-training) or FT (fine-tuning), X ⊆ R^d be our input domain, Y_• and Ŷ_• be the true and predicted output spaces for some model respectively, and Θ, Ψ_•, Φ be spaces of parameters for models. We will use f_• : X; (Θ, Ψ_•) → Ŷ_• to refer to a parametric model, with the semicolon separating the input space from the parameter spaces. We then define f_• = f_•^(head) ∘ f^(feat), such that f^(feat)(·; θ ∈ Θ) is a feature extractor that is transferable across learning stages (e.g., pre-training to fine-tuning), and f_•^(head)(·; ψ ∈ Ψ_•) is a stage-specific head that is not transferable. Given a data distribution x_•, y_• ∼ D_•, a parametric model f_•, and a loss function L_• : Ŷ_• × Y_• → R, we also define for convenience a corresponding expected loss L_• : Θ, Ψ_• → R via L_•(θ, ψ_•; D_•) = E_{D_•}[L_•(f_•(x_•; θ, ψ_•), y_•)]. We also adopt the convention that the output of the argmin operator is any arbitrary minimum, rather than the set of possible minima, to avoid complications in notation.

2.1 Problem Formulation

Supervised Learning (Fig. 1A). In a fully-supervised setting (our fine-tuning domain), we are given a data distribution D_FT, model f, and loss L_FT. Using a learning algorithm Alg_FT (e.g., SGD) that takes as input initial parameters θ^(0)_FT, ψ^(0)_FT, our goal is to approximate the L_FT-optimal parameters:
\[ \theta^*_{FT}, \psi^*_{FT} = \mathrm{Alg}_{FT}(\theta^{(0)}_{FT}, \psi^{(0)}_{FT}; \mathcal{D}_{FT}) \approx \arg\min_{\theta \in \Theta,\, \psi \in \Psi_{FT}} L_{FT}(\theta, \psi; \mathcal{D}_{FT}). \]

Pre-training (Fig. 1B).
For tasks where data is scarce, we can additionally incorporate a pre-training step and approximate the optimal initial parameters for FT (i.e., the final pre-trained weights are used as the initialization weights of the FT stage), again via an optimization algorithm Alg_PT:
\[ \theta^*_{PT} = \mathrm{Alg}_{PT}(\theta^{(0)}_{PT}, \psi^{(0)}_{PT}; \mathcal{D}_{PT}) \approx \arg\min_{\theta \in \Theta} L_{FT}(\mathrm{Alg}_{FT}(\theta, \psi^{(0)}_{FT}; \mathcal{D}_{FT}); \mathcal{D}_{FT}).^2 \]

¹ We use the term meta-parameter since these structures do not directly affect inference of the final model after FT, but instead inform the process of learning this model (by modulating the PT process).
² Note that we discard the PT head ψ*_PT here, as only the PT feature extractor θ*_PT is transferred.

Figure (1) Meta-Parameterized Pre-Training. A paradigm where meta-parameters (rich, potentially high-dimensional structures that generalize PT hyperparameters) are incorporated in PT to improve the learned representations. Meta-parameters are optimized in a meta-PT phase, using data from FT task(s) in a meta-FT dataset. The FT and meta-FT datasets are (potentially overlapping) samples from the FT data distribution.

Meta-Parameterized PT (Fig. 1C). In meta-parameterized PT, we recognize that, in addition to taking as input the PT parameters θ, Alg_PT is itself parameterized by a set of meta-parameters φ ∈ Φ: arbitrary, potentially high-dimensional quantities that inform the structure of the algorithm directly. These could represent weighting strategies, data augmentation policies, or sampling processes. The optimal meta-parameters φ^(opt) are the solution to the following meta-PT optimization problem:
\[ \phi^{(opt)} = \arg\min_{\phi \in \Phi}\; L_{FT}\Big( \mathrm{Alg}_{FT}\big( \mathrm{Alg}_{PT}(\theta^{(0)}_{PT}, \psi^{(0)}_{PT}; \mathcal{D}_{PT}, \phi),\; \psi^{(0)}_{FT}; \mathcal{D}_{FT} \big); \mathcal{D}_{FT} \Big). \]

2.2 Example: Multitask Meta-Parameterized Pre-Training

To make our notation concrete, we instantiate our setup for a multitask pre-training problem (a code sketch follows this example).

Problem: Suppose we have a multitask classification dataset (X × Y)^N such that Y = Y_1 × ··· × Y_K consists of labels for K distinct tasks. Of this full set of tasks, we are interested only in a subset of M tasks, S = {t_1, ..., t_M} ⊆ {1, ..., K}.

Supervised FT: Under supervised FT alone, we can directly average a cross-entropy loss L_CE over only the tasks in S, L_FT(ŷ, y) = (1/M) Σ_{j=1}^{M} L_CE(ŷ^(t_j), y^(t_j)), and then solve this problem via SGD.

PT: If we assume that S is a random subset of the full set of tasks, we can introduce a PT stage over all tasks: L_PT(ŷ, y) = (1/K) Σ_{i=1}^{K} L_CE(ŷ^(i), y^(i)), followed by FT on S alone. As S is a random subset, leveraging all tasks for PT is well motivated and may improve performance.

Meta-Parameterized PT: In the case where S is not a random subset, the PT strategy described above is no longer well motivated. However, using meta-parameterized PT, we can still effectively pre-train by introducing meta-parameters that weight the tasks, φ = [φ_1 ... φ_K], and modulate the loss function L_PT: L_PT(ŷ, y; φ) = Σ_{i=1}^{K} φ_i L_CE(ŷ^(i), y^(i)). With optimal meta-parameters φ^(opt), the PT stage will leverage only the subset of tasks that best informs the final FT performance. This setting mirrors our real-world experiment in Section 5.
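A short sketch of the three losses in this example, assuming binary tasks for concreteness; the shapes and names (logits ŷ and targets y of shape (batch, K), S a Python list of FT task indices) are our own illustrative choices:

```python
import torch
import torch.nn.functional as F

K = 10
S = [0, 3, 7]                                   # FT task subset (illustrative)
phi = torch.zeros(K, requires_grad=True)        # meta-parameters (task weights)

def l_ft(y_hat, y):
    """FT loss: average CE over the tasks in S only."""
    return F.binary_cross_entropy_with_logits(y_hat[:, S], y[:, S].float())

def l_pt_uniform(y_hat, y):
    """PT loss: average CE over all K tasks (sensible when S is random)."""
    return F.binary_cross_entropy_with_logits(y_hat, y.float())

def l_pt_meta(y_hat, y, phi):
    """Meta-parameterized PT loss: per-task weights phi modulate the CE."""
    per_task = F.binary_cross_entropy_with_logits(
        y_hat, y.float(), reduction="none").mean(dim=0)
    return (phi * per_task).sum()
```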
3 Methods: Optimizing Meta-Parameters for Two-Stage Training

We now introduce our gradient-based algorithm to optimize meta-parameters. We first describe how to efficiently approximate meta-parameter gradients through the two-stage PT and FT optimization. We then present our algorithm and outline practical considerations when using it.

3.1 Efficient Computation of Meta-Parameter Gradients

We begin by defining:
\[ g(\phi;\, \theta^{(0)}_{PT}, \psi^{(0)}_{PT}, \psi^{(0)}_{FT}) = L_{FT}\Big( \mathrm{Alg}_{FT}\big( \mathrm{Alg}_{PT}(\theta^{(0)}_{PT}, \psi^{(0)}_{PT}; \mathcal{D}_{PT}, \phi),\; \psi^{(0)}_{FT}; \mathcal{D}_{FT} \big); \mathcal{D}_{FT} \Big), \quad (1) \]
where the inner Alg_PT term produces the parameter θ_PT and the Alg_FT term produces the parameters θ_FT, ψ_FT, so that φ^(opt) = argmin_{φ∈Φ} g(φ). We also define two best-response values:
\[ \theta^*_{PT}(\phi) = \mathrm{Alg}_{PT}(\theta^{(0)}_{PT}, \psi^{(0)}_{PT}; \mathcal{D}_{PT}, \phi), \qquad \theta^*_{FT}(\phi), \psi^*_{FT}(\phi) = \mathrm{Alg}_{FT}(\theta^*_{PT}(\phi), \psi^{(0)}_{FT}; \mathcal{D}_{FT}). \]
We do not explicitly include the dependence of the best responses on the initialization values, for notational convenience. With these defined, we now consider the desired gradient term ∂g/∂φ. Under our definitions, the direct partial derivatives ∂L_FT/∂φ and ∂Alg_FT/∂φ are zero, so ∂g/∂φ reduces to a simple expression of the chain rule:
\[ \frac{\partial g}{\partial \phi}\bigg|_{\phi'} = \underbrace{\frac{\partial L_{FT}}{\partial [\theta_{FT}, \psi_{FT}]}\bigg|_{\theta^*_{FT}(\phi'), \psi^*_{FT}(\phi')}}_{\text{FT loss gradient}} \times \underbrace{\frac{\partial \mathrm{Alg}_{FT}}{\partial \theta_{PT}}\bigg|_{\theta^*_{PT}(\phi')}}_{\text{FT best-response Jacobian}} \times \underbrace{\frac{\partial \mathrm{Alg}_{PT}}{\partial \phi}\bigg|_{\phi'}}_{\text{PT best-response Jacobian}}. \quad (2) \]
The FT loss gradient term on the RHS of (2) is easily computed using backpropagation. Computing the other two terms is more involved, and we detail each below, beginning with the PT best-response Jacobian. The full algorithm with both gradient estimation terms is provided in Algorithm 1.

PT best-response Jacobian ∂Alg_PT/∂φ. Using recent work in hyperparameter optimization with implicit differentiation [42], we re-express this term using the implicit function theorem (IFT). If we assume that θ*_PT(φ) = Alg_PT(θ^(0)_PT; D_PT, φ) is a good approximation of argmin_{θ∈Θ} L_PT(θ; D_PT, φ) (i.e., the PT model converges to L_PT-optimal parameters), then under certain smoothness and regularity assumptions on the PT parameters and meta-parameters, the IFT allows us to re-express ∂Alg_PT/∂φ as:
\[ \frac{\partial \mathrm{Alg}_{PT}}{\partial \phi}\bigg|_{\phi'} = - \left[ \frac{\partial^2 L_{PT}}{\partial \theta_{PT}\, \partial \theta_{PT}^{\top}} \right]^{-1} \times \frac{\partial^2 L_{PT}}{\partial \theta_{PT}\, \partial \phi^{\top}}\, \Bigg|_{\theta^*_{PT}(\phi'),\, \phi'}, \quad (3) \]
which is the product of the inverse Hessian and a matrix of mixed partial derivatives. Following [42], the inverse can be efficiently approximated using a truncated Neumann series.

FT best-response Jacobian ∂Alg_FT/∂θ_PT. First, note that without additional constraints on Alg_FT, the FT best-response Jacobian may be zero. This is because L_FT has no functional dependence on the variable θ_PT and, if we assume the convergence point θ*_FT is stable (as we did for the PT best-response Jacobian), this implies that the gradient of θ*_FT with respect to θ_PT would be zero. To enable effective learning, we must therefore either (1) impose restrictions on Alg_FT to ensure there is a dependence between the initialization point and the final loss value (e.g., proximal regularization [55]), or (2) leverage methods that do not differentiate through Alg_FT to convergence, since at non-converged points we will still observe nonzero L_FT-gradients [29, 51]. Given that the FT phase often involves shorter optimization horizons than PT, we take approach (2) here, and iteratively update θ_FT for K steps. We first initialize the FT head ψ^(0)_FT and then compute:
\[ \theta^{(0)}_{FT} = \mathrm{copy}(\theta^*_{PT}) \quad \text{(initialize with the PT solution, implicitly performing a stop-gradient)} \]
\[ [\theta^{(k)}_{FT}, \psi^{(k)}_{FT}] = [\theta^{(k-1)}_{FT}, \psi^{(k-1)}_{FT}] - \eta_{FT}\, \frac{\partial L_{FT}}{\partial [\theta_{FT}, \psi_{FT}]}\bigg|_{\theta^{(k-1)}_{FT}, \psi^{(k-1)}_{FT}}, \quad k = 1, \ldots, K \]
\[ \theta^*_{FT}, \psi^*_{FT} \approx \theta^{(K)}_{FT}, \psi^{(K)}_{FT}, \quad (4) \]
and compute the gradient ∂Alg_FT/∂θ_PT |_{θ*_PT(φ')} by differentiating through this optimization.³ We can also choose to freeze the feature extractor parameters θ_FT and update only the head parameters ψ_FT during truncated FT, and use this to obtain meta-parameter gradients. This resembles linear evaluation, where a linear classifier is trained on top of fixed, pre-trained feature extractors [50, 3, 63]. Together, these two approximations allow for efficient computation of meta-parameter gradients.

³ While Equation 4 uses standard gradient descent, we could use other differentiable optimizers (e.g., Adam).
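To make the composition of these two approximations concrete, the following self-contained toy sketch (our own construction, using a linear model and per-example PT weights rather than the paper's deep networks and task weights) mirrors the structure of Equations (3) and (4): exact backpropagation through K unrolled FT steps gives the FT terms, and a truncated Neumann series gives the inverse-Hessian-vector product for the IFT term.

```python
import torch

torch.manual_seed(0)
d, n_pt, n_ft = 5, 200, 40
X_pt, y_pt = torch.randn(n_pt, d), torch.randn(n_pt)
X_tr, y_tr = torch.randn(n_ft, d), torch.randn(n_ft)      # D_FT^(tr)
X_val, y_val = torch.randn(n_ft, d), torch.randn(n_ft)    # D_FT^(val)
phi = torch.zeros(n_pt, requires_grad=True)               # meta-parameters

def pt_loss(theta, phi):
    """Meta-parameterized PT loss: per-example weighted squared error."""
    return (torch.sigmoid(phi) * (X_pt @ theta - y_pt) ** 2).mean()

# PT stage: plain SGD to (approximate) convergence; no graph is kept.
theta = torch.zeros(d, requires_grad=True)
opt = torch.optim.SGD([theta], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    pt_loss(theta, phi.detach()).backward()
    opt.step()
theta_pt = theta.detach().requires_grad_()  # leaf node for the FT unroll

# FT best-response terms: differentiate exactly through K unrolled FT
# steps (Eq. 4), here on the full parameter vector for simplicity.
K, lr_ft = 5, 0.1
theta_ft = theta_pt
for _ in range(K):
    g_ft = torch.autograd.grad(((X_tr @ theta_ft - y_tr) ** 2).mean(),
                               theta_ft, create_graph=True)[0]
    theta_ft = theta_ft - lr_ft * g_ft
g_val = ((X_val @ theta_ft - y_val) ** 2).mean()
v = torch.autograd.grad(g_val, theta_pt)[0]  # FT loss grad x FT Jacobian

# PT best-response Jacobian via the IFT (Eq. 3): u ~ v @ H^{-1} from a
# truncated Neumann series, then one mixed-partial VJP into phi.
alpha, J = 0.05, 50
grad_theta = torch.autograd.grad(pt_loss(theta_pt, phi), theta_pt,
                                 create_graph=True)[0]
p, acc = v.clone(), v.clone()
for _ in range(J):
    hvp = torch.autograd.grad(grad_theta, theta_pt, grad_outputs=p,
                              retain_graph=True)[0]
    p = p - alpha * hvp      # p <- (I - alpha * H) p
    acc = acc + p
u = (alpha * acc).detach()   # u ~ H^{-1} v
meta_grad = -torch.autograd.grad(grad_theta @ u, phi)[0]  # dg/dphi
```

One would then take a step φ ← φ − η · meta_grad and continue PT online, as in the algorithm presented next.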
Algorithm 1: Gradient-based algorithm to learn meta-parameters. Notation is defined in Appendix B, Table 3. Vector-Jacobian products (VJPs) can be efficiently computed by standard autodifferentiation.

1: Initialize PT parameters θ_PT^(init), ψ_PT^(init), ψ_FT^(0) and meta-parameters φ^(0)
2: for n = 1, ..., N meta-iterations do
3:   Initialize θ_PT^(0) = θ_PT^(init) and ψ_PT^(0) = ψ_PT^(init)
4:   for p = 1, ..., P PT iterations do
5:     [θ_PT^(p), ψ_PT^(p)] = [θ_PT^(p−1), ψ_PT^(p−1)] − η_PT ∂L_PT/∂[θ_PT, ψ_PT] |_{θ_PT^(p−1), ψ_PT^(p−1)}
6:   end for
7:   Initialize the FT encoder with the PT solution: θ_FT^(0) = copy(θ_PT^(P))
8:   Approximate θ*_FT, ψ*_FT using Eq. 4
9:   Compute g1 = ∂L_FT/∂[θ_FT, ψ_FT] |_{θ*_FT, ψ*_FT}
10:  Compute the VJP g2 = g1 · ∂Alg_FT/∂θ_PT |_{θ_PT^(P), ψ_FT^(0)} using the unrolled learning step from line 8
11:  Approximate the VJP ∂g/∂φ |_{φ^(n−1)} = g2 · ∂Alg_PT/∂φ |_{φ^(n−1)} using the IFT (Eq. 3)
12:  φ^(n) = φ^(n−1) − η_V ∂g/∂φ |_{φ^(n−1)}
13:  Update the PT initialization by setting θ_PT^(init) = θ_PT^(P) and ψ_PT^(init) = ψ_PT^(P)
14: end for

3.2 Our Algorithm and Practical Considerations

By leveraging the above approximations, we obtain Algorithm 1 to optimize meta-parameters φ online during PT & FT of the base model. Note that Alg_PT is explicitly written out as a sequence of gradient updates (lines 4-6 in Algorithm 1). We now discuss practical considerations when using this algorithm, with further details given in Appendix C.

(1) Access to D_FT and generalizing to new FT tasks: Solving the meta-PT problem requires availability of: the model f_•, the PT data D_PT, and the FT data D_FT. In this work, we assume availability of the model and PT dataset, but since assuming access to the complete FT dataset at meta-PT time is more restrictive, we study two scenarios: Full FT Access, where all FT data that we expect to encounter is available at meta-PT time, and Partial FT Access, where the FT data available at meta-PT time is only a sample from a distribution of FT data that we may encounter later. Full FT Access occurs in settings like semi-supervised learning, where we are given a large unlabelled PT dataset and a small labelled FT dataset, and our goal is to achieve the best possible performance by leveraging these two fixed datasets [68, 73, 25, 24, 8, 9]. Partial FT Access occurs when our goal is to learn transferable representations: at meta-PT time, we might have limited knowledge of FT tasks or data. In evaluating this scenario, we examine generalizability to new FT tasks, given only small amounts of FT data/task availability at meta-PT time, demonstrating that even very limited FT access can be sufficient for effective meta-parameter optimization [11, 45, 56, 28].

(2) D_FT splits: In practice, we have access to finite datasets and use minibatches, rather than true data-generating processes. Following standard convention, we split D_FT into two subsets for meta-learning: D_FT^(tr) and D_FT^(val) (independent of any held-out D_FT testing split), and define the FT data available at meta-PT time as D_FT^(Meta) = D_FT^(tr) ∪ D_FT^(val). We use D_FT^(tr) for the computation of ∂Alg_FT/∂θ_PT |_{θ_PT^(P), ψ_FT^(0)} and ∂Alg_PT/∂φ |_{φ^(n−1)}, and D_FT^(val) for the computation of ∂L_FT/∂[θ_FT, ψ_FT] |_{θ*_FT, ψ*_FT} in Algorithm 1.
(3) Online updates: Given that PT phases often involve long optimization horizons, for computational efficiency, we update θ_PT and ψ_PT online rather than re-initializing them at every meta-iteration (see Algorithm 1). FT phases are often shorter, so we could in theory re-initialize ψ_FT at each meta-iteration, as is presented in Algorithm 1. However, it is more computationally efficient to also optimize this online, and we follow this approach in our experiments. A description of the algorithm with these details is provided in Appendix C. Note that prior work [67] has suggested that online optimization of certain hyperparameters (e.g., learning rates) using short horizons may yield suboptimal solutions. We comment on this in Appendix C, and study this effect for our algorithm in synthetic experiments (Appendix E) and in real-world experiments on self-supervised learning (Appendix G), finding that it is not a significant concern.

(4) Computational tractability: Our method can scale to large encoder models and high-dimensional meta-parameters, despite the complexity of the two-stage PT & FT process. This is because: (i) meta-parameters are optimized jointly with the base model parameters; (ii) using the IFT to obtain gradients has similar time and memory complexity to one iteration of training [42]; (iii) the FT best-response Jacobian can be approximated efficiently using a small number of unrolled optimization steps K, and by only unrolling the FT head of the network. In our real-world experiments (Sections 5 and 6), meta-parameterized PT has less than twice the time cost of standard PT. Further details on time and memory cost are provided in Appendices F and G.

(5) Setting optimizer parameters: Learning rates and momentum values can impact the efficacy of the algorithm. A discussion of how to set them in practice is provided in Appendix D.

4 Synthetic Experiments

We validate that our algorithm recovers optimal low- and high-dimensional meta-parameters in two synthetic MNIST experiments with Full FT Access. Further details and results are provided in Appendix E, including a study showing that our method performs comparably to differentiating exactly through the entire learning process of PT & FT, without approximations.

First, we optimize low-dimensional meta-parameters characterizing a data augmentation scheme. We tune a 1-D meta-parameter φ representing the mean of a Normal distribution N(φ, 1²) from which we sample rotation augmentations to apply to PT images. FT images undergo rotations from a Normal distribution N(µ_FT, 1²) with µ_FT = 90°; we therefore expect φ to converge to near µ_FT. Using Algorithm 1 to optimize φ, we find that the mean error in the optimized meta-parameter over 10 different initializations is small: 7.2 ± 1.5°, indicating the efficacy of the algorithm.

Next, we consider learning high-dimensional meta-parameters that characterize a PT per-example weighting scheme. The PT dataset contains some examples that have noisy labels, and FT examples all have clean labels. The meta-parameters are the parameters of a neural network that assigns an importance weight to each PT example, which is used to weight the loss on that example during PT.
We use Algorithm 1 again to optimize φ over 10 random initializations, finding that the ratio of importance weights assigned to clean-label PT examples versus noisy-label PT examples is greater than 10². This is expected, since the noisy-label examples may worsen the quality of the PT model and so should be down-weighted.

5 Meta-Parameterized Multitask Pre-Training for Graph Neural Networks

We consider optimizing PT task weights for a multitask PT & FT problem of predicting the presence of protein functions (multitask binary classification) given graph-structured biological data as input. We have two experimental goals: first, in the Full FT Access setting, where methods are given access to all FT data at PT time, we evaluate whether optimizing task-weighting meta-parameters can improve predictive performance on the FT tasks. Second, motivated by the fact that in typical transfer learning problems, new tasks or labels not available at PT time may become available at FT time, we study the Partial FT Access setting, investigating how our method performs when it sees only limited FT tasks at PT time. In both settings, our method outperforms baselines.

5.1 Problem Setup

Dataset and Task. We consider the transfer learning benchmark introduced in [28], where the prediction problem at both PT and FT is multitask binary classification: predicting the presence/absence of specific protein functions (y) given a Protein-Protein Interaction (PPI) network as input (represented as a graph x). The PT dataset has pairs D_PT = {(x_i, y_i)}_{i=1}^{|D_PT|}, where y ∈ {0,1}^5000 characterizes the presence/absence of 5000 particular protein functions. The FT dataset has pairs D_FT = {(x_i, y_i)}_{i=1}^{|D_FT|}, where y ∈ {0,1}^40 now characterizes the presence/absence of 40 different protein functions. Further dataset details are in Appendix F.

Meta-Parameterized Multitask PT. To define a meta-parameterized PT scheme, we let meta-parameters φ ∈ R^5000 be weights for the binary PT tasks. Then, we define a PT loss incorporating the weights:
\[ L_{PT} = \frac{1}{5000} \sum_{i=1}^{5000} 2\,\sigma(\phi_i)\, L_{CE}\big(f_{PT}(x;\theta_{PT},\psi_{PT})_i,\; y_i\big), \]
with i indexing the tasks, σ(·) the sigmoid function (to ensure non-negativity and clamp the range of the weights), and L_CE the binary cross-entropy loss. With this loss defined, we use Algorithm 1 (with P = 10 PT steps and K = 1 truncated FT steps) to jointly learn φ and the feature extractor parameters θ_PT. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset D_FT proposed by the dataset creators [28] for computing the relevant gradient terms.

Baselines. Motivated by our goals, we compare with the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• Graph Supervised PT: As explored in prior work on this domain [28], perform multitask supervised PT with D_PT. This corresponds to setting all task weights to 1: φ_i = 1, i = 1, ..., 5000.
• CoTrain: A common baseline that makes use of the FT data available during PT [70] (like meta-parameterized PT). We PT a model with 5000+40 outputs (covering the space of PT and FT labels) jointly on both D_PT and D_FT. We do so by alternating gradient updates on batches sampled from each dataset in turn. Further details are in Appendix F.
• CoTrain + PCGrad: An extension of CoTrain, where we leverage the method PCGrad [72] to perform gradient projection and prevent destructive gradient interference between updates from D_PT and D_FT. Further details, and the variants we tried, are in Appendix F.

Experimental Details. We use a standardized setup to facilitate comparisons. Following [28], all methods use the Graph Isomorphism Network architecture [69], undergo PT for 100 epochs and FT for 50 epochs over 5 random seeds, and use early stopping based on validation set performance. During FT, we initialize a new FT network head and either FT the whole network or freeze the PT feature extractor and learn the FT head alone (Linear Evaluation [50]). We report results for the strategy that performed best (full results in the appendix). We consider two experimental scenarios: (1) Full FT Access: provide methods full access to D_PT and D_FT at PT time (D_FT^(Meta) = D_FT) and evaluate on the full set of 40 FT tasks; (2) Partial FT Access: limit the number of FT tasks seen at PT time by letting D_FT^(Meta) include only 30 of the 40 FT tasks. At FT time, models are fine-tuned on the held-out 10 tasks not in D_FT^(Meta). We use a 4-fold approach where we leave out 10 of the 40 FT tasks in turn, and examine performance across these 10 held-out tasks, over the folds.

5.2 Results

Key Findings. By optimizing PT task weights, meta-parameterized multitask PT improves performance on the FT problem of predicting the presence/absence of protein functions given a protein-protein interaction graph as input. Performance improvements are also seen when generalizing to new FT tasks (protein functions) unseen at meta-PT time.

Table 1 presents quantitative results for the two experimental settings described. For the No PT and Graph Supervised PT baselines, we re-implement the methods from [28], obtaining improved results (full comparison in Appendix Table 5). In both full and partial FT access settings, meta-parameterized PT improves significantly on other methods, indicating that optimizing meta-parameters can improve predictive performance generally, and can be effective even when new, related tasks are considered at evaluation time. Interestingly, we observe that CoTrain and CoTrain + PCGrad obtain relatively poor performance compared to other baselines; this could be because these methods overfit to the FT data during PT. Further analysis is presented in Appendix F.

Further experiments. In Appendix F, we study another partial FT access scenario with smaller D_FT^(Meta), setting |D_FT^(Meta)| = 0.5 |D_FT|, and find that meta-parameterized PT again outperforms other methods (Table 7). We also examine another meta-parameter learning baseline, namely a version of CoTrain where we optimize task weights using a traditional hyperparameter optimization algorithm [42] jointly with the main model. We find that our method outperforms this baseline as well (Table 5).

Method                | AUC (D_FT^(Meta) = D_FT) | AUC (D_FT^(Meta) excludes tasks)
No PT                 | 66.6 ± 0.7               | 65.8 ± 2.5
Graph Supervised PT   | 74.7 ± 0.1               | 74.8 ± 1.8
CoTrain               | 70.2 ± 0.3               | 69.3 ± 1.8
CoTrain + PCGrad      | 69.4 ± 0.2               | 68.1 ± 2.3
Meta-Parameterized PT | 78.6 ± 0.1               | 77.0 ± 1.3

Table (1) Meta-Parameterized PT improves predictive performance over baselines. The table shows mean AUC and standard error for two evaluation settings. When provided all FT data at PT time (first results column), meta-parameterized PT significantly improves predictive performance.
In the more challenging setting where D_FT^(Meta) excludes FT tasks (10 of the 40 available tasks are held out), evaluating mean AUC/standard error across four folds with each set of 10 FT tasks held out in turn, meta-parameterized PT again obtains the best performance: it is effective even with partial information about the downstream FT tasks.

Analysis of learned structures. In Appendix F, we conduct further analysis and study the effect of various PT strategies on the pre-trained representations (Figure 3), finding intuitive patterns of similarity between different methods. We also examine the learned task weights (Figure 4) and performance on a per-FT-task basis with/without meta-parameterized PT (Figure 5), finding little evidence of negative transfer.

6 Meta-Parameterized SimCLR for Semi-Supervised Learning with ECGs

We now explore a second real-world application of our method: optimizing a data augmentation policy for self-supervised PT with SimCLR [8, 9] on electrocardiograms (ECGs). SimCLR is a popular self-supervised PT method that leverages data augmentations to define a contrastive PT objective (details in Appendix G.1). The choice and strength of the augmentations used significantly impact the effectiveness of the algorithm [8]. In settings where relevant augmentations are known (e.g., natural images), SimCLR is readily applicable; however, for ECGs, effective augmentations are less clear, motivating the use of our algorithm to optimize the augmentation pipeline.

We have two experimental goals. Firstly, we examine the typical semi-supervised learning setting of Full FT Access: we explore whether optimizing the augmentations in SimCLR PT can improve performance on the supervised FT task of detecting pathologies from ECGs, given access to all FT data at meta-PT time. Secondly, to study the data efficiency of our method, we consider the Partial FT Access setting and explore performance given access to limited FT data at meta-PT time. We find that our method improves the performance of SimCLR, and that it is effective even with very limited amounts of FT data provided at meta-PT time.

6.1 Problem Setup

Dataset and Task. We construct a semi-supervised learning (SSL) problem using PTB-XL [64, 20], an open-source dataset of electrocardiogram (ECG) data. Let the model input at both PT and FT time be denoted by x, which represents a 12-lead (or channel) ECG sampled at 100 Hz for 10 seconds, resulting in a 1000 × 12 signal. Our goal is to pre-train a model f_PT on an unlabeled PT dataset of ECGs D_PT = {x_i}_{i=1}^{|D_PT|} using SimCLR PT [8], and then fine-tune it on the labeled FT dataset D_FT = {(x_i, y_i)}_{i=1}^{|D_FT|}, where the FT labels y ∈ {0,1}^5 encode whether the signal contains certain features indicative of particular diseases/pathologies. Further dataset details are in Appendix G.

ECG Data Augmentations. To augment each ECG for SimCLR (example in Appendix G, Figure 6), we apply three transformations in turn (based on prior work in time series augmentation [30, 66]):
1. Random cropping: A randomly selected portion of the signal is zeroed out.
2. Random jittering: IID Gaussian noise is added to the signal.
3. Random temporal warping: The signal is warped with a random, diffeomorphic temporal transformation. This is formed by sampling from a zero-mean, fixed-variance Gaussian at each temporal location in the signal to obtain a velocity field, and then integrating and smoothing (following [4, 5]) to generate a temporal displacement field, which is applied to the signal.
Test AUC at different FT dataset sizes |D_FT|:

Method                    | 100        | 250        | 500        | 1000       | 2500
No PT                     | 71.5 ± 0.7 | 76.1 ± 0.3 | 78.7 ± 0.3 | 82.0 ± 0.2 | 84.5 ± 0.2
SimCLR                    | 74.6 ± 0.4 | 76.5 ± 0.3 | 79.8 ± 0.3 | 82.2 ± 0.3 | 85.8 ± 0.1
Meta-Parameterized SimCLR | 76.1 ± 0.5 | 77.8 ± 0.4 | 81.7 ± 0.2 | 84.0 ± 0.3 | 86.7 ± 0.1

Table (2) Meta-Parameterized SimCLR obtains improved semi-supervised learning performance. The table shows mean AUC/standard error over seeds across 5 FT binary classification tasks for baselines and meta-parameterized SimCLR at different sizes of D_FT, with D_FT^(Meta) = D_FT. We observe improvements in performance with meta-parameterized SimCLR, which optimizes the augmentation pipeline.

Meta-Parameterized SimCLR. To construct a meta-parameterized SimCLR PT scheme, we instantiate meta-parameters φ as the weights of a neural network w(x; φ) that takes in an input signal and outputs the warp strength: the variance of the Gaussian that is used to obtain the velocity field for temporal warping. This parameterization permits signals to be warped more or less aggressively depending on their individual structure. With this definition, the SimCLR PT loss is directly a function of the meta-parameters, and we can use Algorithm 1 (with P = 10 PT steps and K = 1 truncated FT steps) to jointly learn φ and the feature extractor parameters θ_PT. For computational efficiency, we only update the FT head when computing the FT best-response Jacobian and keep the feature extractor of the model fixed. We use the training and validation splits of the FT dataset D_FT proposed by the dataset creators [64] for computing the relevant gradient terms.

Baselines. Our experimental goals suggest the following PT baselines:
• No PT: Do not perform PT (i.e., feature extractor parameters are randomly initialized).
• SimCLR: Pre-train a model using SimCLR with the above three augmentations, without learning per-example temporal warping strengths.

Experimental Details. We standardize the experimental setup to facilitate comparisons. All methods use a 1D CNN based on a ResNet-18 [23] architecture. The temporal warping network w(x; φ) is a four-layer 1D CNN. SimCLR PT takes place for 50 epochs for all methods, over three PT seeds. At evaluation time, for all methods, we initialize a new FT network head over the PT network feature extractor and FT the whole network for 200 epochs, over five FT seeds. Validation set AUC is used for early stopping. We consider two experimental settings: (1) Full FT Access, standard SSL: consider different sizes of the labelled FT dataset D_FT and make all the FT data available at meta-PT time, D_FT^(Meta) = D_FT; and (2) Partial FT Access, examining the data efficiency of our algorithm: SSL when only limited FT data is available at meta-PT time, D_FT^(Meta) ⊆ D_FT. We evaluate performance across the 5 binary classification tasks in both settings. Further details are provided in Appendix G.

6.2 Results

Key Findings. By optimizing the data augmentation policy used in SimCLR PT, meta-parameterized SimCLR improves performance on the FT problem of detecting pathologies from ECG data. Even a small amount of FT data provided at meta-PT time can lead to improved FT performance.

Table 2 shows results for the Full FT Access setting, D_FT^(Meta) = D_FT: mean AUC/standard error over seeds across the 5 FT binary classification tasks at different sizes of D_FT. We observe that meta-parameterized SimCLR improves on the baselines in all settings.
Note that while these gains are modest, they are obtained with simple augmentation policies; our method may yield further improvements if applied to policies with more scope to specialize the augmentations.

Next, we consider the Partial FT Access scenario where D_FT^(Meta) ⊆ D_FT, which is relevant when we only have a small amount of FT data at meta-PT time. Fixing |D_FT| = 500, we find that with |D_FT^(Meta)| as small as 50, we obtain a test AUC of 81.3 ± 0.5, compared to 79.8 ± 0.3 with no optimization of augmentations: this shows that even a small |D_FT^(Meta)| appears to be sufficient for meta-parameter learning. Further results showing performance curves as |D_FT^(Meta)| varies are in Appendix G.

Further experiments. In Appendix G, we study other aspects of our method on this domain, including: (1) exploring different values of K, the number of FT steps differentiated through when obtaining meta-parameter gradients; and (2) examining a meta-parameter learning baseline where augmentations are optimized for supervised learning, using the method in [42], and then applied to semi-supervised learning (to compare how optimizing augmentations for supervised learning compares to optimizing them for semi-supervised learning). We find that our method is not very sensitive to the value of K (provided K > 0), and that it outperforms this additional baseline.

7 Related Work

Gradient-based hyperparameter optimization (HO): Gradient-based HO roughly falls into two camps. The simpler and less scalable approach differentiates through training [12, 44]. The other approach assumes that optimization reaches a fixed point and approximates the best-response Jacobian [7, 41, 43, 42]. Neither of these approaches can be straightforwardly applied to scalably differentiate through two stages of optimization (PT & FT). Direct differentiation through both stages would be too memory-intensive. Approximating the best-response Jacobian using the IFT as in [42] twice is feasible, but requires changing the FT objective to include a proximal term [55] and tuning two sets of interacting approximations. Instead, we compose a constant-memory IFT approximation for the lengthy PT stage with exact backpropagation-through-training for the shorter FT stage.

Applications of Nested Optimization: Many prior works frame learning as nested optimization, including few-shot learning [16, 1, 17, 55, 21, 58, 53, 75, 31, 38], neural network teaching [14, 15, 62, 54], learning data augmentation and reweighting strategies [32, 22, 57, 60, 29], and auxiliary task learning [49, 51, 39]. The majority of this work studies nested optimization in the standard one-stage supervised learning paradigm, unlike our setting: the two-stage PT & FT problem. The most closely related works to ours are [70], where PT task weights are learned for a multitask PT problem using electronic health record data, and [71], where a masking policy is learned for masked language modelling PT. In contrast to our work, which introduces the more general framing of meta-parameter optimization, [70] and [71] focus only on specific instantiations of meta-parameters as task weights and masking policies. The learning algorithms in these works either differentiate directly through truncated PT & FT [71] (which may not scale to longer PT or large encoder models) or leverage extensive first-order approximations [70], unlike our more generally applicable approach.
8 Scope and Limitations

Our gradient-based algorithm applies in situations where we want to optimize (potentially high-dimensional) PT hyperparameters, or meta-parameters, and have access to a model, PT data, and FT data. We demonstrated that even limited FT data availability can be sufficient to guide meta-parameter learning; however, our method would not apply when no FT data at all is available at meta-PT time, or if the model or PT data were not available. Our algorithm requires meta-parameters to be differentiable, and cannot directly be used to optimize meta-parameters that do not affect the PT optimization landscape (e.g., PT learning rates).

9 Conclusion

In this work, we studied the problem of optimizing high-dimensional pre-training (PT) hyperparameters, or meta-parameters. We formalized Meta-Parameterized Pre-Training, a variant of standard PT incorporating these meta-parameters, and proposed a gradient-based algorithm to efficiently learn meta-parameters by approximately differentiating through the two-stage PT & FT learning process. In experiments, we used our algorithm to improve predictive performance on two real-world PT tasks: multitask PT with graph-structured data [28], and self-supervised contrastive PT on electrocardiogram signals using SimCLR [8]. Future work could apply our method to learn other potential instantiations of meta-parameters, such as learned auxiliary tasks and noise models.

Societal Impact. Our contribution in this work is methodological, namely a new algorithm to optimize high-dimensional pre-training hyperparameters. We do not expect there to be direct negative societal impacts of this contribution. However, to evaluate our method, we considered an experimental domain using healthcare data. Given the high-risk nature of this domain, before use in real-world settings, the method should be validated in retrospective and prospective studies. This is to detect any failure modes and identify potential harm that may come from deploying it.

Acknowledgements

This work was supported in part by funds from Quanta Computer, Inc. The authors thank the members of the Clinical and Applied Machine Learning group at MIT and Paul Vicol for helpful feedback.
1. What is the focus and contribution of the paper regarding meta-learning techniques? 2. What are the strengths of the proposed approach, particularly in its novelty and improvements? 3. What are the weaknesses of the paper, especially regarding the complexity and applicability of the approach? 4. Do you have any concerns about the use of meta-parameters and their potential impact on the performance of the model? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
This paper introduces a gradient-based algorithm which improves the first stage of the two-stage pre-train and fine-tune process with meta-learning techniques. The major contribution is proposing to use additional parameters, called meta-parameters, to augment the pre-training stage, and a novel gradient-based algorithm to learn the meta-parameters through the two-stage process. The experimental results demonstrate the significant improvement of the proposed approach over the standard two-stage PT and FT process.

Review
== Pros ==
Using meta-parameters to improve a pre-trained model has enough novelty, and further research on this open problem is meaningful.
The authors propose a novel gradient-based algorithm to optimize meta-parameters.
The proposed approach achieves significant improvements on both tasks.
== Cons ==
Meta-parameters as neural networks could have much more representative power than task weights or temporal transformations; I'd like to see more complex scenarios where this approach can be applied.
Although the experimental setting in this paper may be new, there still need to be other meta-learning techniques as baselines if possible, because, as far as I know, there are many gradient-based meta-learning algorithms that have not yet been used in the PT and FT process.
Meta-learning techniques usually require much more computing resources and time; an analysis of GPU memory and execution speed would be appreciated.
Some symbols are confusing to me, such as line 4 in Algorithm 1.
NIPS
Title
PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation

Abstract
Can Qin∗, Haoxuan You∗, Lichen Wang, C.-C. Jay Kuo, Yun Fu
Department of Electrical & Computer Engineering, Northeastern University; Department of Computer Science, Columbia University; Department of Electrical and Computer Engineering, University of Southern California; Khoury College of Computer Science, Northeastern University
[email protected], [email protected], [email protected], [email protected], [email protected]

1 Introduction

3D vision has achieved promising outcomes in wide-ranging real-world applications (e.g., autonomous cars, robots, and surveillance systems). Enormous amounts of 3D point cloud data are captured by depth cameras or LiDAR sensors nowadays, and sophisticated 3D vision and machine learning algorithms are required to analyze their content for further exploitation. Recently, the advent of Deep Neural Networks (DNNs) has greatly boosted the performance of 3D vision understanding, including the tasks of classification, detection, and segmentation [22, 9, 37, 41]. Despite this impressive success, DNNs require massive amounts of labeled data for training, which are time-consuming and expensive to collect. This issue significantly limits their adoption in the real world. Domain adaptation (DA) addresses this problem by building a model that utilizes the knowledge of a label-rich dataset, i.e., the source domain, and generalizes well on a label-scarce dataset, i.e., the target domain.

1The PointDA-10 data and official code are uploaded at https://github.com/canqin001/PointDAN
∗Equal Contribution.
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

However, due to the shifts of distribution across different domains/datasets, a model trained on one domain usually performs poorly on other domains. Most DA methods address this problem by either mapping the original features into a shared subspace or minimizing instance-level distances, such as MMD, CORAL, etc., to mix cross-domain features [2, 18, 31]. Recently, inspired by the Generative Adversarial Network (GAN) [12], adversarial-training DA methods such as DANN, ADDA, and MCD have achieved promising performance in DA and drawn increasing attention [10, 32, 26]. They deploy a zero-sum game between a discriminator and a generator to learn domain-invariant representations. However, most of the existing DA approaches mainly target 2D vision tasks and globally align the distribution shifts between different domains. For 3D point cloud data, in contrast, the geometric structures in 3D space can be described in detail, and different local structures have clear semantic meaning (such as legs for chairs), which in turn combine to form the global semantics of a whole object. As shown in Fig. 1, two 3D objects may be hard to align globally but still have similar 3D local structures, which are easier to align. A domain adaptation framework that focuses on local geometric structures is therefore highly desirable in the 3D DA scenario.

To this end, this paper introduces a novel point-based Unsupervised Domain Adaptation Network (PointDAN) to achieve unsupervised domain adaptation (UDA) for 3D point cloud data. The key to our approach is to jointly align the multi-scale, i.e., global and local, features of point cloud data in an end-to-end manner. Specifically, Self-Adaptive (SA) nodes associated with an adjusted receptive field are proposed to dynamically gather and align local features across domains.
Moreover, a node attention module is further designed to explore and interpret the relationships between nodes and their contributions to alignment. Meanwhile, an adversarial-training strategy is deployed to align the global features. Since there have been few benchmarks for DA on 3D data (i.e., point clouds), we build a new benchmark named the PointDA-10 dataset for 3D vision DA. It is generated by selecting the samples of 10 overlapping categories among three popular datasets (i.e., ModelNet [35], ShapeNet [3] and ScanNet [5]). In all, the contributions of our paper are threefold:
• We introduce a novel 3D-point-based unsupervised domain adaptation method that locally and globally aligns the distributions of 3D objects across different domains.
• For local feature alignment, we propose Self-Adaptive (SA) nodes with node attention to utilize local geometric information and dynamically gather regional structures for aligning local distributions across different domains.
• We collect a new 3D point cloud DA benchmark, named the PointDA-10 dataset, for fair evaluation of 3D DA methods. Extensive experiments on PointDA-10 demonstrate the superiority of our model over state-of-the-art general-purpose DA methods.

2 Related Works

2.1 3D Vision Understanding

Different from 2D vision, 3D vision has various data representation modalities: multi-view, voxel grid, 3D mesh and point cloud data. Deep networks have been employed to deal with these different formats of 3D data [29, 19, 36, 8]. Among these modalities, the point cloud, represented by a set of points with 3D coordinates {x, y, z}, is the most straightforward representation for preserving 3D spatial information. Point clouds can be directly obtained by LiDAR sensors, which enables many 3D environment understanding applications, from scene segmentation to autonomous driving. PointNet [22] is the first deep neural network to directly process point clouds; it proposes a symmetry function and a spatial transform network to obtain invariance to point permutation. However, local geometric information, which is vital for describing objects in 3D space, is ignored by PointNet. Recent work therefore mainly focuses on how to effectively utilize local features. For instance, in PointNet++ [23], a series of PointNet structures are applied to local point sets of varied sizes, and local features are gathered in a hierarchical way. PointCNN [17] proposes χ-Conv to aggregate features in local patches and applies a bottom-up network structure like typical CNNs. In 3D object detection tasks, [41] proposes to divide a large scene into many voxels, where the features of the inside points are extracted respectively, followed by a 3D Region Proposal Network (RPN) structure to obtain detection predictions.

In spite of this broad usage, point cloud data has significant drawbacks in labeling efficiency. During labeling, people need to rotate an object several times and look through different angles to identify it. In real-world environments where point cloud data are scanned by LiDAR, it also happens that some parts are lost or occluded (e.g., tables lose legs), which makes efficient labeling more difficult. Under these circumstances, a 3D point-based unsupervised domain adaptation method specifically designed to mitigate the domain gap between labeled source data and unlabeled target data is highly desirable.
2.2 Unsupervised Domain Adaptation (UDA)

The main challenge of UDA is the distribution shift (i.e., domain gap) that exists between the target and source domains. It violates the basic assumption of conventional machine learning algorithms that training samples and test samples share the same distribution. To bridge the domain gap, UDA approaches match either the marginal distributions [30, 21, 11, 33] or the conditional distributions [38, 4] between domains via feature alignment. They address this problem by learning a mapping function f which projects the raw image features into a shared feature space across domains. Most of them attempt to maximize the inter-class discrepancy while simultaneously minimizing the intra-class distance in a subspace. Various methods based on, e.g., Correlation Alignment (CORAL) [31], Maximum Mean Discrepancy (MMD) [2, 18], or the Geodesic distance [13] have been proposed. Apart from the methods aforementioned, many DNN-based domain adaptation methods have been proposed due to their great capacity in representation learning [14, 28, 16]. The key to these methods is to apply a DNN to learn domain-invariant features in an end-to-end training scenario. Another kind of approach utilizes an adversarial training strategy to obtain domain-invariant representations [10, 32, 7, 24]. It includes a discriminator and a generator, where the generator aims to fool the discriminator until the discriminator is unable to distinguish the generated features between the two domains. Such approaches include Adversarial Discriminative Domain Adaptation (ADDA) [32], the Domain Adversarial Neural Network (DANN) [10], Maximum Classifier Discrepancy (MCD) [26], etc.

Most UDA methods are designed for 2D vision tasks and focus on the alignment of global image features across different domains, whereas in 3D data analysis tasks, regional and local geometry information is crucial for achieving good learning performance. Zhou et al. [40] first introduced UDA for the task of 3D keypoint estimation, relying on the regularization of a multi-view consistency term. However, this method cannot be extended to more general tasks, e.g., classification. In [27, 34], point cloud data are first projected into 2D images (bird's-eye view or front view), and 2D DA methods are applied, which loses essential 3D geometric information. To this end, we propose a generalized 3D point-based UDA framework. It preserves the local structures well and explores the global correlations of all local features. Adversarial training strategies are further employed to locally and globally align the distribution shifts between the source and target domains.

3 Proposed Model

3.1 Problem Definition and Notation

In 3D point-based UDA, we have access to a labeled source domain $\mathcal{S} = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, where $y_i^s \in \mathcal{Y} = \{1, \ldots, Y\}$, with $n_s$ annotated pairs, and a target domain $\mathcal{T} = \{x_j^t\}_{j=1}^{n_t}$ of $n_t$ unlabeled data points. The inputs are point clouds, usually represented by 3-dimensional coordinates $(x, y, z)$, where $x_i^s, x_j^t \in \mathcal{X} \subset \mathbb{R}^{T \times 3}$ with $T$ the number of points sampled from one 3D object, and the label spaces are shared, $\mathcal{Y}_s = \mathcal{Y}_t$. It is further assumed that the two domains are sampled from the distributions $P_s(x_i^s, y_i^s)$ and $P_t(x_i^t, y_i^t)$ respectively, while the i.i.d. assumption is violated due to the distribution shift $P_s \neq P_t$. The key to UDA is to learn a mapping function $\Phi: \mathcal{X} \rightarrow \mathbb{R}^d$ that projects raw inputs into a shared feature space $\mathcal{H}$ suitable for cross-domain samples.
3.2 Local Feature Alignment

Local geometric information plays an important role in describing point cloud objects as well as in domain alignment. As illustrated in Fig. 1, given the same "table" class, the instance from ScanNet misses parts of its legs due to obstacles during LiDAR scanning. The key to aligning these two "tables" is to extract and match the features of similar structures, i.e., the planes, while ignoring the differing parts. To utilize the local geometric information, we propose to adaptively select and update key nodes for better local alignment.

Self-Adaptive Node Construction: Here we give the definition of a node in a point cloud. For each point cloud, we represent its $n$ local geometric structures as $n$ point sets $\{S_c \mid S_c = \{\hat{x}_c, x_{c1}, \ldots, x_{ck}\},\ x \subseteq \mathbb{R}^3\}_{c=1}^{n}$, where the $c$-th region $S_c$ contains a node $\hat{x}_c$ and its $k$ nearest neighbor points $\{x_{c1}, \ldots, x_{ck}\}$. The location of a node decides where the local region is and which points are included. To obtain local features, previous work commonly employs farthest point sampling or random sampling to choose the center nodes [23, 17]. These methods guarantee full coverage of the whole point cloud. However, for domain alignment, it is essential to make sure that the nodes cover structures with common characteristics in 3D geometric space and drop the parts unique to certain objects. In this way, local regions sharing similar structures are better suited for alignment, while the uncommon parts would introduce negative transfer. Inspired by deformable convolution in 2D vision [6], we propose a novel geometry-guided shift learning module, which makes the nodes' receptive fields self-adaptive. Different from deformable convolution, where semantic features are used for predicting offsets, we utilize the local edge vectors as guidance during learning. As shown in Fig. 2, our module transforms the semantic information of each edge into a weight and then aggregates the weighted edge vectors to obtain the predicted offset direction. Intuitively, the predicted shift is decided by voting among the surrounding edges with different significance.

We first initialize the node locations by farthest point sampling over the point cloud to get $n$ nodes, and their $k$ nearest neighbor points are collected to form $n$ regions. For the $c$-th node, its offset is computed as:

$$\Delta \hat{x}_c = \frac{1}{k} \sum_{j=1}^{k} \big( R_T(v_{cj} - \hat{v}_c) \cdot (x_{cj} - \hat{x}_c) \big), \qquad (1)$$

where $\hat{x}_c$ and $x_{cj}$ denote the locations of the node and its neighbor point, so $x_{cj} - \hat{x}_c$ is the edge direction; $v_{cj}$ and $\hat{v}_c$ are their mid-level point features extracted from the encoder $v = E(x \mid \Theta_E)$, and $R_T$ is the weight of one convolution layer for transforming features. We apply the bottom 3 feature extraction layers of PointNet as the encoder $E$. $\Delta \hat{x}_c$ is the predicted location offset of the $c$-th node. After obtaining the learned shift $\Delta \hat{x}_c$, we achieve the self-adaptive update of nodes and their regions by adding the shift back to the node $\hat{x}_c$ and finding its $k$ nearest neighbor points:

$$\hat{x}_c = \hat{x}_c + \Delta \hat{x}_c, \qquad (2)$$

$$\{x_{c1}, \ldots, x_{ck}\} = \mathrm{kNN}(\hat{x}_c \mid x_j,\ j = 0, \ldots, M-1). \qquad (3)$$

Then the final node feature $\hat{v}_c$ is computed by gathering all the point features inside its region:

$$\hat{v}_c = \max_{j=1,\ldots,k} R_G(v_{cj}), \qquad (4)$$

where $R_G$ is the weight of one convolution layer for gathering point features, with $R_G \cup R_T = R$, and the output node features are employed for local alignment.
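As an illustration of Eqs. (1)-(2), the sketch below computes SA-node offsets from precomputed neighbor features. The tensor shapes and the choice of R_T as a linear layer producing one scalar weight per edge are our assumptions; the paper implements R_T as a convolution layer.

```python
import torch
import torch.nn as nn

# Illustration of Eqs. (1)-(2): SA-node offsets as a weighted vote over
# local edges. Shapes and R_T = Linear(d, 1) are our assumptions.
def sa_node_update(nodes, node_feats, nbr_pts, nbr_feats, R_T):
    # nodes:      (n, 3)     node locations  x_hat_c
    # node_feats: (n, d)     node features   v_hat_c
    # nbr_pts:    (n, k, 3)  neighbor locations x_cj
    # nbr_feats:  (n, k, d)  neighbor features  v_cj
    edge_dirs = nbr_pts - nodes.unsqueeze(1)          # x_cj - x_hat_c
    feat_diff = nbr_feats - node_feats.unsqueeze(1)   # v_cj - v_hat_c
    weights = R_T(feat_diff)                          # (n, k, 1) edge weights
    offsets = (weights * edge_dirs).mean(dim=1)       # Eq. (1): weighted vote
    return nodes + offsets                            # Eq. (2): shifted nodes

n, k, d = 64, 16, 128
new_nodes = sa_node_update(torch.randn(n, 3), torch.randn(n, d),
                           torch.randn(n, k, 3), torch.randn(n, k, d),
                           nn.Linear(d, 1))
```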
For better engagement of the SA node features, we also interpolate them back to each point following the interpolation strategy in [23] and fuse them with the original point features through a skip connection. The fused features are input to the next-stage generator for higher-level processing.

SA Node Attention: Even with SA nodes obtained, it is unreasonable to assume that every SA node contributes equally to the goal of domain alignment. An attention module, designed to model the relationships between nodes, is necessary for weighting the contributions of different SA nodes to domain alignment and for capturing features at larger spatial scales. Inspired by channel attention [39], we apply a node attention network to model the contribution of each SA node to alignment by introducing a bottleneck network with a residual structure [14]:

$$h_c = \varphi(W_U\, \delta(W_D z_c)) \cdot \hat{v}_c + \hat{v}_c, \qquad (5)$$

where $z_c = E(\hat{v}_c(k))$ indicates the mean of the $c$-th node feature, and $\delta(\cdot)$ and $\varphi(\cdot)$ represent the ReLU function [20] and the Sigmoid function, respectively. $W_D$ is the weight set of a convolutional layer with $1 \times 1$ kernels, which reduces the number of channels by the ratio $r$. The channel-upscaling layer $W_U$, with $W_U \cup W_D = W$, increases the number of channels back to the original by the ratio $r$.

SA Node Feature Alignment: The optimization of both the offsets and the network parameters for local alignment is sensitive to gradient disturbances, which makes GAN-based methods unstable. Therefore, we minimize the MMD [2, 18] loss to align cross-domain SA node features:

$$\mathcal{L}_{mmd} = \frac{1}{n_s^2} \sum_{i,j=1}^{n_s} \kappa(h_i^s, h_j^s) - \frac{2}{n_s n_t} \sum_{i,j=1}^{n_s, n_t} \kappa(h_i^s, h_j^t) + \frac{1}{n_t^2} \sum_{i,j=1}^{n_t} \kappa(h_i^t, h_j^t), \qquad (6)$$

where $\kappa$ is a kernel function; we use the Radial Basis Function (RBF) kernel in our model.
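A minimal sketch of the RBF-kernel MMD loss of Eq. (6) follows; the kernel bandwidth σ is an assumed hyper-parameter, as the paper does not specify it here (multi-bandwidth kernel mixtures are also common in practice).

```python
import torch

# Minimal sketch of the RBF-kernel MMD loss of Eq. (6); the bandwidth
# sigma is an assumed hyper-parameter.
def rbf_kernel(a, b, sigma=1.0):
    d2 = torch.cdist(a, b) ** 2                       # pairwise squared dists
    return torch.exp(-d2 / (2.0 * sigma ** 2))

def mmd_loss(h_s, h_t, sigma=1.0):
    # h_s: (n_s, d) source SA-node features; h_t: (n_t, d) target features
    return (rbf_kernel(h_s, h_s, sigma).mean()
            - 2.0 * rbf_kernel(h_s, h_t, sigma).mean()
            + rbf_kernel(h_t, h_t, sigma).mean())

loss = mmd_loss(torch.randn(32, 64), torch.randn(32, 64))
```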
3.3 Global Feature Alignment

Given the feature $f_i \in \mathbb{R}^d$ of the $i$-th sample produced by a generator network, global feature alignment attempts to minimize the distance between features across different domains. Unlike local feature alignment, the global alignment process is more stable due to the invariance of the inputs' receptive field, which provides more options for choosing GAN-based methods. In this paper, we apply Maximum Classifier Discrepancy (MCD) [26] for global feature alignment due to its outstanding performance in general-purpose domain alignment. The encoder $E$ designed for SA node feature extraction is also applied to extract raw point cloud features $\tilde{h}_i = E(x_i \mid \Theta_E)$ over the whole object, and the point features are concatenated with the interpolated SA-node features as $\hat{h}_i = [h_i, \tilde{h}_i]$ to capture the geometric information at multiple scales. Then, we feed $\hat{h}_i$ to the generator network $G$, which is the final convolution layer (i.e., conv4) of PointNet followed by a global max-pooling, to obtain the high-level global feature $f_i = \mathrm{maxpool}(G(\hat{h}_i \mid \Theta_G))$, where $f_i \in \mathbb{R}^d$ represents the global feature of the $i$-th sample and $d$ is set to 1,024.

The global alignment module attempts to align the domains with two classifier networks $F_1$ and $F_2$ so as to keep the features discriminative given the support of the source-domain decision boundaries. The two classifiers take the feature $f_i$ and classify it into $K$ classes as $p_j(y_i \mid x_i) = F_j(f_i \mid \Theta_F^j)$, $j = 1, 2$, where $p_j(y_i \mid x_i)$ is the $K$-dimensional probabilistic softmax output of classifier $j$. To train the model, the total loss is composed of two parts: the task loss and the discrepancy loss. Similar to most UDA methods, the objective of the task loss is to minimize the empirical risk on the source domain $\{X_s, Y_s\}$, formulated as:

$$\mathcal{L}_{cls}(X_s, Y_s) = -\mathbb{E}_{(x_s, y_s) \sim (X_s, Y_s)} \sum_{k=1}^{K} \mathbb{1}_{[k = y_s]} \log p\big( (y = y_s) \mid G(E(x_s \mid \Theta_E) \mid \Theta_G) \big). \qquad (7)$$

The discrepancy loss is calculated as the $\ell_1$ distance between the softmax scores of the two classifiers:

$$\mathcal{L}_{dis}(x_t) = \mathbb{E}_{x_t \sim X_t}\big[\, |p_1(y \mid x_t) - p_2(y \mid x_t)| \,\big]. \qquad (8)$$

3.4 Training Procedure

We apply back-propagation [25] to optimize the whole framework in an end-to-end training scenario. The training process is composed of two steps:

Step 1. First, the two classifiers $F_1$ and $F_2$ are trained with the discrepancy loss $\mathcal{L}_{dis}$ in Eq. (8) and the classification loss $\mathcal{L}_{cls}$ in Eq. (7). The discrepancy loss, which is to be maximized, helps gather target features given the support of the source domain. The classification loss is applied to minimize the empirical risk on the source domain. The objective function is:

$$\min_{F_1, F_2} \mathcal{L}_{cls} - \lambda \mathcal{L}_{dis}. \qquad (9)$$

Step 2. In this step, we train the generator $G$, the encoder $E$, the node attention network $W$ and the transform network $R$ by minimizing the discrepancy loss, the classification loss and the MMD loss to achieve discriminative and domain-invariant features. The objective function in this step is:

$$\min_{G, E, W, R} \mathcal{L}_{cls} + \lambda \mathcal{L}_{dis} + \beta \mathcal{L}_{mmd}, \qquad (10)$$

where $\lambda$ and $\beta$ are hyper-parameters, both manually set to 1.

3.5 Theoretical Analysis

In this section, we analyze our method in terms of the $\mathcal{H}\Delta\mathcal{H}$-distance theory [1]. The $\mathcal{H}\Delta\mathcal{H}$-distance is defined as

$$d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) = 2 \sup_{h_1, h_2 \in \mathcal{H}} \big| P_{x \sim \mathcal{S}}[h_1(x) \neq h_2(x)] - P_{x \sim \mathcal{T}}[h_1(x) \neq h_2(x)] \big|, \qquad (11)$$

which represents the discrepancy between the target and source distributions, $\mathcal{T}$ and $\mathcal{S}$, with regard to the hypothesis class $\mathcal{H}$. According to [1], the error of a classifier $h$ on the target domain, $\epsilon_{\mathcal{T}}(h)$, can be bounded by the sum of the source domain error $\epsilon_{\mathcal{S}}(h)$, the $\mathcal{H}\Delta\mathcal{H}$-distance and a constant $C$ which is independent of $h$, i.e.,

$$\epsilon_{\mathcal{T}}(h) \leq \epsilon_{\mathcal{S}}(h) + \frac{1}{2} d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) + C. \qquad (12)$$

The relationship between our method and the $\mathcal{H}\Delta\mathcal{H}$-distance is discussed in the following. The $\mathcal{H}\Delta\mathcal{H}$-distance can also be written as

$$d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) = 2 \sup_{h_1, h_2 \in \mathcal{H}} \big| \mathbb{E}_{x \sim \mathcal{S}} \mathbb{1}_{[h_1(x) \neq h_2(x)]} - \mathbb{E}_{x \sim \mathcal{T}} \mathbb{1}_{[h_1(x) \neq h_2(x)]} \big|. \qquad (13)$$

The term $\mathbb{E}_{x \sim \mathcal{S}} \mathbb{1}_{[h_1(x) \neq h_2(x)]}$ is very small if $h_1$ and $h_2$ classify samples over $\mathcal{S}$ correctly. In our case, $p_1$ and $p_2$ correspond to $h_1$ and $h_2$ respectively, and they agree in their predictions on source samples $\mathcal{S}$. As a result, $d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T})$ can be approximately calculated by $\sup_{h_1, h_2 \in \mathcal{H}} \mathbb{E}_{x \sim \mathcal{T}} \mathbb{1}_{[h_1(x) \neq h_2(x)]}$, which is the supremum of $\mathcal{L}_{dis}$ in our problem. Decomposing the hypothesis $h_1$ into $G$ and $F_1$, and $h_2$ into $G$ and $F_2$, and fixing $G$, we get

$$\sup_{h_1, h_2 \in \mathcal{H}} \mathbb{E}_{x \sim \mathcal{T}} \mathbb{1}_{[h_1(x) \neq h_2(x)]} = \sup_{F_1, F_2} \mathbb{E}_{x \sim \mathcal{T}} \mathbb{1}_{[F_1 \circ G(x) \neq F_2 \circ G(x)]}. \qquad (14)$$

Further, we replace sup with max, and attempt to minimize (14) with respect to $G$:

$$\min_{G} \max_{F_1, F_2} \mathbb{E}_{x \sim \mathcal{T}} \mathbb{1}_{[F_1 \circ G(x) \neq F_2 \circ G(x)]}. \qquad (15)$$

Problem (15) is similar to problems (9)-(10) in our method. Considering the discrepancy loss $\mathcal{L}_{dis}$, we first train the classifiers $F_1$, $F_2$ to maximize $\mathcal{L}_{dis}$ on the target domain and then train the generator $G$ to minimize $\mathcal{L}_{dis}$, which matches problem (15). Although we also need to consider the source loss $\mathcal{L}_{cls}$ and the MMD loss $\mathcal{L}_{mmd}$, we can see from [1] that our method still has a close connection to the $\mathcal{H}\Delta\mathcal{H}$-distance. Thus, by iteratively training $F_1$, $F_2$ and $G$, we can effectively reduce $d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T})$, and in turn tighten the bound on $\epsilon_{\mathcal{T}}(h)$ in terms of $\epsilon_{\mathcal{S}}(h)$.
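Before turning to the dataset, here is a toy sketch of the alternating optimization of Section 3.4 (Eqs. (9)-(10)). The module shapes, data, and optimizer settings are placeholders, and the MMD term of Eq. (10) is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy sketch of the two-step optimization in Eqs. (9)-(10). All shapes,
# modules and data are placeholders; the MMD term is omitted.
d, K, lam = 64, 10, 1.0
G = nn.Linear(128, d)                         # stand-in for encoder+generator
F1, F2 = nn.Linear(d, K), nn.Linear(d, K)     # the two classifiers
opt_F = torch.optim.Adam(list(F1.parameters()) + list(F2.parameters()), lr=1e-4)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)

def discrepancy(p1, p2):                      # Eq. (8): l1 between softmaxes
    return (p1.softmax(-1) - p2.softmax(-1)).abs().mean()

x_s = torch.randn(8, 128)                     # a source batch (features)
y_s = torch.randint(0, K, (8,))               # source labels
x_t = torch.randn(8, 128)                     # a target batch (unlabeled)

# Step 1 (Eq. 9): fit F1, F2 on the source and maximize target discrepancy.
f_s, f_t = G(x_s).detach(), G(x_t).detach()   # generator frozen in this step
loss_f = (F.cross_entropy(F1(f_s), y_s) + F.cross_entropy(F2(f_s), y_s)
          - lam * discrepancy(F1(f_t), F2(f_t)))
opt_F.zero_grad(); loss_f.backward(); opt_F.step()

# Step 2 (Eq. 10): update G to minimize classification loss and discrepancy.
f_s, f_t = G(x_s), G(x_t)
loss_g = (F.cross_entropy(F1(f_s), y_s) + F.cross_entropy(F2(f_s), y_s)
          + lam * discrepancy(F1(f_t), F2(f_t)))
opt_G.zero_grad(); loss_g.backward(); opt_G.step()
```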
Table 1: Number of samples in the proposed datasets.

Dataset       Split   Bathtub  Bed  Bookshelf  Cabinet  Chair  Lamp   Monitor  Plant  Sofa   Table  Total
M (ModelNet)  Train   106      515  572        200      889    124    465      240    680    392    4,183
              Test    50       100  100        86       100    20     100      100    100    100    856
S (ShapeNet)  Train   599      167  310        1,076    4,612  1,620  762      158    2,198  5,876  17,378
              Test    85       23   50         126      662    232    112      30     330    842    2,492
S* (ScanNet)  Train   98       329  464        650      2,578  161    210      88     495    1,037  6,110
              Test    26       85   146        149      801    41     61       25     134    301    1,769

4 PointDA-10 Dataset

[Figure 3: Samples of the PointDA-10 dataset, showing the 10 shared classes (Bathtub, Bed, Bookshelf, Cabinet, Chair, Lamp, Monitor, Plant, Sofa, Table) in ModelNet-10, ShapeNet-10 and ScanNet-10.]

As there is no 3D point cloud benchmark designed for domain adaptation, we propose three datasets with different characteristics, i.e., ModelNet-10, ShapeNet-10 and ScanNet-10, for the evaluation of point cloud DA methods. To build them, we extract the samples of 10 shared classes from ModelNet40 [35], ShapeNet [3] and ScanNet [5], respectively. The statistics and visualizations are shown in Table 1 and Fig. 3. Given the three sub-datasets, we organize six adaptation scenarios: M → S, M → S*, S → M, S → S*, S* → M and S* → S.

ModelNet-10 (M): ModelNet40 contains clean 3D CAD models of 40 categories. To extract the overlapping classes, we regard the 'nightstand' class in ModelNet40 as the 'cabinet' class in ModelNet-10, because these two objects share almost the same structure. After getting the CAD models, we sample points on the surfaces as in [23] to fully cover each object.

ShapeNet-10 (S): ShapeNetCore contains 3D CAD models of 55 categories gathered from online repositories. ShapeNet contains more samples, and its objects have larger structural variance than ModelNet. We apply uniform sampling to collect the points on ShapeNet surfaces, which, compared with ModelNet, may lose some marginal points.

ScanNet-10 (S*): ScanNet contains scanned and reconstructed real-world indoor scenes. We isolate the instances of the 10 classes contained in annotated bounding boxes for classification. The objects often lose some parts and get occluded by their surroundings. ScanNet is a challenging but realistic domain.

5 Experiments

5.1 Experiments Setup

In this section, we evaluate the proposed method under the standard protocol [11] of unsupervised domain adaptation on the task of point cloud classification.

Implementation Details: We choose PointNet [22] as the backbone of the encoder E and generator G and apply a two-layer multilayer perceptron (MLP) as F1 and F2. The proposed approach is implemented in PyTorch with Adam [15] as the optimizer and trained on an NVIDIA TITAN GPU. The learning rate is set to 0.0001 with weight decay 0.0005. All models are trained for 200 epochs with batch size 64. We extract the SA node features from the third convolution layer (i.e., conv3) for local-level alignment, and the number of SA nodes is set to 64.

Baselines: We compare the proposed method with a series of general-purpose UDA methods, including Maximum Mean Discrepancy (MMD) [18], Adversarial Discriminative Domain Adaptation (ADDA) [32], Domain Adversarial Neural Network (DANN) [10], and Maximum Classifier Discrepancy (MCD) [26]. In these experiments, we use the same loss and training policy for all methods. "w/o Adapt" refers to the model trained only on source samples, and "Supervised" means the fully supervised method.
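For reference, a minimal sketch of farthest point sampling, which is used above to sample CAD model surfaces and in Section 3.2 to initialize node locations, is given below. This is a generic textbook implementation, not the paper's code.

```python
import torch

# Generic farthest point sampling sketch: greedily picks n points, each
# maximizing the distance to the already-picked set.
def farthest_point_sample(pts, n):
    # pts: (M, 3) point cloud; returns indices of n well-spread points.
    M = pts.shape[0]
    idx = torch.zeros(n, dtype=torch.long)
    dist = torch.full((M,), float("inf"))
    farthest = int(torch.randint(0, M, (1,)))
    for i in range(n):
        idx[i] = farthest
        d = ((pts - pts[farthest]) ** 2).sum(-1)   # sq. dist to newest pick
        dist = torch.minimum(dist, d)              # dist to nearest picked pt
        farthest = int(torch.argmax(dist))
    return idx

nodes = farthest_point_sample(torch.randn(1024, 3), 64)  # e.g., 64 SA nodes
```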
Ablation Study Setup: To analyze the effects of each module, we conduct an ablation study over four components: global feature alignment (G), local feature alignment (L), SA node attention (A), and self-training [42] (P), which finetunes the model with 10% pseudo target labels generated from the target samples with the highest softmax scores.

Evaluation: Given the labeled samples from the source domain and the unlabeled samples from the target domain for training, all models are evaluated on the test set of the target domain. All experiments are repeated three times, and we report the average top-1 classification accuracy in all tables.

5.2 Classification Results on the PointDA-10 Dataset

The quantitative results and comparisons on the PointDA-10 dataset are summarized in Table 2. The proposed methods outperform all the general-purpose baseline methods on all adaptation scenarios. Although the largest domain gaps appear on M → S* and S → S*, ours exhibits large improvements there, which demonstrates its superiority in aligning different domains. Among the baseline methods, MMD, although defeated by GAN-based methods in 2D vision tasks, is only slightly inferior here and even outperforms them on some domain pairs. This phenomenon could be explained by global features limiting the upper bound, due to their weakness in representing diversified geometric information. In addition, there still exists a great margin between the supervised method and the DA methods. Table 3 reports the class-wise classification results on the domain pair M → S. Local alignment helps boost the performance on most of the classes, especially Monitor and Chair. However, some objects, e.g., sofa and bed, are quite challenging to recognize under the UDA scenario, where negative transfer happens and the performance can drop on these classes. Moreover, we observed that the imbalanced training samples do affect the performance of our model and the other domain adaptation (DA) models, which makes Table 3 slightly noisy. Chair, Table, and Sofa (easily confused with Bed) cover more than 60% of the samples in the M-to-S scenario, which causes the drop on certain classes (e.g., Bed and Sofa).

5.3 Quantitative Analysis

Ablation Study: We further analyze the effect of the four components proposed in our model (i.e., G, L, A, P). From Table 2, we find that adding local alignment together with SA nodes brings significant improvement, whereas local alignment with fixed nodes alone improves little. These results substantially validate the effectiveness of our SA nodes, which we attribute to their self-adaptive regional receptive fields and learned importance weights. An interesting phenomenon in Table 3 is that the full method is defeated by G+L+A in class-wise accuracy. This means that the inference of pseudo labels is easily influenced by the imbalanced distribution of samples across classes, where certain classes dominate the self-training process and cause error accumulation.

Convergence: We evaluate the convergence of the proposed methods as well as the baseline methods on ModelNet-to-ShapeNet in Fig. 4(d). Compared with the baseline methods, local alignment helps accelerate convergence and makes the methods more stable after convergence.

SA Node Feature Extraction Layer: The influence of different layers for mid-level feature extraction is analyzed in Fig. 4(c) on M → S and S* → M. Compared with conv1 and conv2, whose features are less semantic, conv3 provides the best mid-level features for local alignment.
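The self-training component P selects confident target predictions as pseudo-labels; a minimal sketch of that selection step, under the assumption that confidence is the maximum softmax probability, is shown below.

```python
import torch

# Sketch of pseudo-label selection for self-training (component P): keep
# the 10% most confident target predictions; "confidence" is assumed to be
# the maximum softmax probability.
def select_pseudo_labels(logits, ratio=0.10):
    probs = logits.softmax(dim=-1)
    conf, labels = probs.max(dim=-1)          # predicted class + confidence
    k = max(1, int(ratio * conf.numel()))
    keep = conf.topk(k).indices               # the most confident samples
    return keep, labels[keep]

idx, pseudo_y = select_pseudo_labels(torch.randn(100, 10))
```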
5.4 Results Visualization

We visualize the top-contributing SA nodes for the local alignment of two cross-domain objects to interpret the effectiveness of local feature alignment in Fig. 4(a)-4(b). The matched nodes are selected as the elements with the highest values of the matrix $M = h^s (h^t)^\top \in \mathbb{R}^{64 \times 64}$, with the features obtained from Eq. (5). It is easily observed that the SA nodes representing similar geometric structures, e.g., legs and planes, contribute the most to local alignment, whether between the same or different objects across domains. This clearly demonstrates the common knowledge learned by the SA nodes for local alignment.

6 Conclusion

In this paper, we propose a novel 3D Unsupervised Domain Adaptation Network for Point Cloud Data (PointDAN). PointDAN is a specifically designed framework based on multi-scale feature alignment. For local feature alignment, we introduce Self-Adaptive (SA) nodes to represent common geometric structures across domains, and we apply a GAN-based method to align features globally. To evaluate the proposed model, we build a new 3D domain adaptation benchmark. In the experiments, we have demonstrated the superiority of our approach over state-of-the-art domain adaptation methods.

Acknowledgements

We thank Qianqian Ma from Boston University for her helpful theoretical insights and comments on our work.
1. What is the main contribution of the paper in the field of 3D point cloud processing? 2. What are the novel structures and modules introduced by the authors in their model? 3. How does the paper provide theoretical analysis for the proposed method? 4. What are the strengths of the paper regarding its technical soundness and performance compared to other methods? 5. Do you have any concerns or suggestions regarding the presentation and discussion of the method's weaknesses and limitations?
Review
Review
The authors present the first domain adaptation model for 3D point clouds. They come up with novel structures and modules to create their model. For this, they build on a variety of known and novel techniques; for example, they use a PointNet++ encoder, but also introduce novel Self-Adaptive nodes and use a convolution similar to bilateral convolution (they call it deformable convolution) to extract features for domain alignment. The submission seems technically sound, and the authors provide a theoretical analysis for their method in terms of the H\Delta H theory. Since I am not an expert in domain adaptation, I cannot give a conclusive judgement of their contribution on that end. The paper is clearly written, though I noticed a substantial number of writing mistakes w.r.t. articles (the, a). The presented method achieves clearly better results than other methods undergoing domain transfer without adaptation. It would be interesting, though, to see the results of other methods fine-tuned with a small amount of labelled data, to get an impression of the complexity of the domain transfer task between the different datasets. Also, even though an ablation study is performed for the different proposed parts of the architecture, there is no discussion of the weaknesses of the method, which would be helpful. The approach, together with the newly proposed dataset, could be a valuable contribution to the community.
NIPS
Title PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation Abstract Can Qin∗, Haoxuan You∗, Lichen Wang, C.-C. Jay Kuo, Yun Fu Department of Electrical & Computer Engineering, Northeastern University Department of Computer Science, Columbia University Department of Electrical and Computer Engineering, University of Southern California Khoury College of Computer Science, Northeastern University [email protected], [email protected], [email protected], [email protected], [email protected] 1 Introduction 3D vision has achieved promising outcomes in wide-ranging real-world applications (i.e., autonomous cars, robots, and surveillance system). Enormous amounts of 3D point cloud data is captured by depth cameras or LiDAR sensors nowadays. Sophisticated 3D vision and machine learning algorithms are required to analyze its content for further exploitation. Recently, the advent of Deep Neural Network (DNN) has greatly boosted the performance of 3D vision understanding including tasks of classification, detection, and segmentation[22, 9, 37, 41]. Despite its impressive success, DNN requires massive amounts of labeled data for training which is time-consuming and expensive to collect. This issue significantly limits its promotion in the real world. Domain adaptation (DA) solves this problem by building a model utilizing the knowledge of label-rich dataset, i.e., source domain, which generalizes well on the label-scarce dataset, i.e., target domain. 1The PointDA-10 data and official code are uploaded on https://github.com/canqin001/PointDAN ∗Equal Contribution. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. Lichen-Figure1 However, due to the shifts of distribution across different domains/datasets, a model trained on one domain usually performs poorly on other domains. Most DA methods address this problem by either mapping original features into a shared subspace or minimizing instance-level distances, such as MMD, CORAL etc., to mix cross-domain features [2, 18, 31]. Currently, inspired by Generative Adversarial Network (GAN) [12], adversarial-training DA methods, like DANN, ADDA, MCD etc., have achieved promising performance in DA and drawn increasing attentions [10, 32, 26]. They deploy a zero-sum game between a discriminator and a generator to learn domain-invariant representations. However, most of the existing DA approaches mainly target on 2D vision tasks, which globally align the distribution shifts between different domains. While for 3D point cloud data, the geometric structures in 3D space can be detailedly described, and different local structures also have clear semantic meaning, such as legs for chairs, which in return combine to form the global semantics for a whole object. As shown in Fig. 1, two 3D objects might be weak to align in global, but would have similar 3D local structures, which are easier to be aligned. So it is urgently desired for a domain adaptation framework to focus on local geometric structures in 3D DA scenario. To this end, this paper introduces a novel point-based Unsupervised Domain Adaptation Network (PointDAN) to achieve unsupervised domain adaptation (UDA) for 3D point cloud data. The key to our approach is to jointly align the multi-scale, i.e., global and local, features of point cloud data in an end-to-end manner. Specifically, the Self-Adaptive (SA) nodes associated with an adjusted receptive field are proposed to dynamically gather and align local features across domains. 
Moreover, a node attention module is further designed to explore and interpret the relationships between nodes and their contributions in alignment. Meanwhile, an adversarial-training strategy is deployed to globally align the global features. Since there are few benchmarks for DA on 3D data ( i.e., point cloud) before, we build a new benchmark named PointDA-10 dataset for 3D vision DA. It is generated by selecting the samples in 10 overlapped categories among three popular datasets (i.e., ModelNet [35], ShapeNet [3] and ScanNet [5]). In all, the contributions of our paper could be summarized in three folds: • We introduce a novel 3D-point-based unsupervised domain adaptation method by locally and globally align the 3D objects’ distributions across different domains. • For local feature alignment, we propose the Self-Adaptive (SA) nodes with a node attention to utilize local geometric information and dynamically gather regional structures for aligning local distribution across different domains. • We collect a new 3D point cloud DA benchmark, named PointDA-10 dataset, for fair evaluation of 3D DA methods. Extensive experiments on PointDA-10 demonstrate the superiority of our model over the state-of-the-art general-purpose DA methods. 2 Related Works 2.1 3D Vision Understanding Different from 2D vision, 3D vision has various data representation modalities: multi-view, voxel grid, 3D mesh and point cloud data. Deep networks have been employed to deal with the above different formats of 3D data [29, 19, 36, 8]. Among the above modalities, point cloud, represented by a set of points with 3D coordinates {x, y, z}, is the most straightforward representation to preserve 3D spatial information. Point cloud can be directly obtained by LiDAR sensors, which brings a lot of 3D environment understanding applications from scene segmentation to automatic driving. PointNet [22] is the first deep neural networks to directly deal with point clouds, which proposes a symmetry function and a spatial transform network to obtain the invariance to point permutation. However, Lichen-Figure2 local geometric information is vital for describing object in 3D space, which is ignored by PointNet. So recent work mainly focuses on how to effectively utilize local feature. For instance, in PointNet++ [23], a series of PointNet structures are applied to local point sets with varied sizes and local features are gathered in a hierarchical way. PointCNN [17] proposes χ-Conv to aggregate features in local pitches and applies a bottom-up network structure like typical CNNs. In 3D object detection tasks, [41] proposes to divide a large scene into many voxels, where features of inside points are extracted respectively and a 3D Region Proposal Network (RPN) structure is followed to obtain detection prediction. In spite of the broad usage, point cloud data has significant drawbacks in labeling efficiency. During labeling, people need to rotate several times and look through different angles to identify an object. In real-world environment where point cloud data are scanned from LiDAR, it also happens that some parts are lost or occluded (e.g.tables lose legs), which makes efficient labeling more difficult. Under this circumstance, a specific 3D point-based unsupervised domain adaptation method designed to mitigate the domain gap of source labeled data and target unlabeled data is extremely desired. 
2.2 Unsupervised Domain Adaptation (UDA) The main challenge of UDA is that distribution shift (i.e., domain gap) exists between the target and source domain. It violates the basic assumption of conventional machine learning algorithms that training samples and test samples sharing the same distribution. To bridge the domain gap, UDA approaches match either the marginal distributions [30, 21, 11, 33] or the conditional distributions [38, 4] between domains via feature alignment. It addresses this problem by learning a mapping function f which projects the raw image features into a shared feature space across domains. Most of them attempt to maximizing the inter-class discrepancy, while minimize the intra-class distance in a subspace simultaneously. Various methods, such as Correlation Alignment (CORAL) [31], Maximum Mean Discrepancy (MMD) [2, 18], or Geodesic distance [13] have been proposed. Apart from the methods aforementioned, many DNN-based domain adaptation methods have been proposed due to their great capacity in representation learning [14, 28, 16]. The key to these methods is to apply DNN to learn domain-invariant features through an end-to-end training scenario. Another kind of approach utilizes adversarial training strategy to obtain the domain invariant representations [10, 32, 7, 24]. It includes a discriminator and a generator where the generator aims to fool the discriminator until the discriminator is unable to distinguish the generated features between the two domains. Such approaches include Adversarial Discriminative Domain Adaptation (ADDA) [32], Domain Adversarial Neural Network (DANN) [10], Maximum Classifier Discrepancy (MCD) [26] etc. Most of UDA methods are designed for 2D vision tasks and focus on the alignment of global image features across different domains. While in 3D data analytical tasks, regional and local geometry information is crucial for achieving good learning performance. Zhou et al. [40] firstly introduced UDA on the task of 3D keypoint estimation relying on the regularization of multi-view consistency term. However, this method cannot be extended to more generalized tasks, i.e., classification. In [27, 34], point cloud data are first projected into 2D images (bird-eye view or front view), and 2D DA methods are applied, which would lose essential 3D geometric information. To this end, we propose a generalized 3D point-based UDA framework. It well preserves the local structures and explores the global correlations of all local features. Adversarial training strategies are further employed to locally and globally align the distribution shifts across the source and target domains. 3 Proposed Model 3.1 Problem Definition and Notation In 3D point-based UDA, we have the access to labeled source domain S = {(xsi , ysi )} ns i=1 where ysi ∈ Y = {1, ..., Y } with ns annotated pairs and target domain T = {xtj} nt j=1 of nt unlabeled data points. The inputs are point cloud data usually represented by 3-dimensional coordinates (x, y, z) where xsi , xtj ∈ X ⊂ RT×3, where T is the number of sampling points of one 3D object, with the same label space Ys = Yt. It is further assumed that two domains are sampled from the distributions Ps(xsi , ysi ) and Pt(xti, yti) respectively while the i.i.d. assumption is violated due to the distribution shift Ps 6= Pt. The key to UDA is to learn a mapping function Φ : X → Rd that projects raw inputs into a shared feature spaceH spreadable for cross-domain samples. 
3.2 Local Feature Alignment The local geometric information plays an important role in describing point cloud objects as well as domain alignment. As illustrated in Fig. 1, given the same “table” class, the one from ScanNet misses parts of legs due to the obstacles through LiDAR scanning. The key to align these two “tables” is to extract and match the features of similar structures, i.e., plains, while ignoring the different parts. To utilize the local geometric information, we propose to adaptively select and update key nodes for better fitting the local alignment. Self-Adaptive Node Construction: Here we give the definition of node in point cloud. For each point cloud, we represent its n local geometric structures as n point sets {Sc|Sc = {x̂c, xc1, ..., xck}, x ⊆ R3}nc=1, where the c-th region Sc contains a node x̂c and its surrounding k nearest neighbor points {xc1, ..., xck}. The location of a node decides where the local region is and what points are included. To achieve local features, directly employing the farthest point sampling or random sampling to get the center node is commonly used in previous work [23, 17]. These methods guarantee full coverage over the whole point cloud. However, for domain alignment, it is essential to make sure that these nodes cover the structures of common characteristics in 3D geometric space and drop the parts unique to certain objects. In this way, the local regions sharing similar structures are more proper to be aligned, while the uncommon parts would bring a negative transfer influence. Inspired by deformable convolution in 2D vision [6], we propose a novel geometric-guided shift learning module, which makes the input nodes self-adaptive in receptive field for network. Different from Deformable Convolution where semantic features are used for predicting offset, we utilize the local edge vector as a guidance during learning. As show in Fig. 2, our module transforms semantic information of each edge into its weight and then we aggregate the weighted edge vectors together to obtain our predicted offset direction. Intuitively, the prediction shift is decided by the voting of surrounding edges with different significance. We first initialize the location of node by the farthest point sampling over the point cloud to get n nodes, and their k nearest neighbor points are collected together to form n regions. For the c-th node, its offset is computed as: ∆x̂c = 1 k k∑ j=1 (RT (vcj − v̂c) · (xcj − x̂c)), (1) where x̂ and xcj denote location of node and its neighbor point, so xcj − x̂c means the edge direction. vcj and v̂c are their mid-level point feature extracted from the encoder v = E(x|ΘE) and RT is the weight from one convolution layer for transforming feature. We apply the bottom 3 feature extraction layers of PointNet as the encoder E. ∆x̂c is the predicted location offset of the c-th node. After obtaining learned shift ∆x̂c, we achieve the self-adaptive update of nodes and their regions by adding shift back to node x̂c and finding their k nearest neighbor points: x̂c = x̂c + ∆x̂c, (2) {xc1, ..., xck} = kNN(x̂c|xj , j = 0, ...,M − 1). (3) Then the final node features v̂c is computed by gathering all the point features inside their regions: v̂c = max j=1,..,k RG(vcj). (4) whereRG is the weight of one convolution layer for gathering point features in whichRG ⋃ RT = R, and the output node features are employed for local alignment. 
For better engaging SA node features, we also interpolate them back into each point following the interpolation strategy in [23] and fuse them with the original point features from a skip connection. The fused feature is input into next-stage generator for higher-level processing. SA Node Attention: Even achieving SA nodes, it is unreasonable to assume that every SA node contributes equally to the goal of domain alignment. The attention module, which is designed to model the relationship between nodes, is necessary for weighting the contributions of different SA nodes for domain alignment and capturing the features in larger spatial scales. Inspired by the channel attention [39], we apply a node attention network to model the contribution of each SA nodes for alignment by introducing a bottleneck network with a residual structure [14]: hc = ϕ(WUδ(WDzc)) · v̂c + v̂c, (5) where zc = E(v̂c(k)) indicates the mean of the c-th node feature. δ(·) and ϕ(·) represent the ReLU function [20] and Sigmoid function respectively. WD is the weight set of a convolutional layer with 1× 1 kernels, which reduces the number of channels with the ratio r. The channel-upscaling layer WU , where WU ⋃ WD =W , increases the channels to its original number with the ratio r. SA Node Feature Alignment: The optimization of both offsets and network parameters for local alignment are sensitive to the disturbance of gradients, which makes GAN-based methods perform unstable. Therefore, we minimize the MMD [2, 18] loss to align cross-domain SA node features as: Lmmd = 1 nsns ns∑ i,j=1 κ(hsi ,h s j) + 1 nsnt ns,nt∑ i,j=1 κ(hsi ,h t j) + 1 ntnt nt∑ i,j=1 κ(hti,h t j), (6) where κ is a kernel function and we apply the Radial Basis Function (RBF) kernel in our model. 3.3 Global Feature Alignment After having the features fi ∈ Rd corresponding to the i-th sample by a generator network, the global feature alignment attempts to minimize the distance between features across different domains. In difference of local feature alignment, global feature alignment process is more stable due to the invariance of receptive field of inputs, which provides more options for choosing GAN-based methods. In this paper, we apply Maximum Classifier Discrepancy (MCD) [26] for global feature alignment due to its outstanding performance in general-purpose domain alignment. The encoder E designed for SA node feature extraction is also applied for extracting raw point cloud features: h̃i = E (xi|ΘE) over the whole object. And the point features are concatenated with interpolated SA-node features as ĥi = [hi, h̃i] to capture the geometry information in multi-scale. Then, we feed the ĥi to the generator network G which is the final convolution layer (i.e., conv4) of PointNet attached with a global max-pooling to achieve high-level global feature fi = max− pooling(G(ĥi|ΘG)), where fi ∈ Rd represents the global feature of the i-th sample. And d is usually assigned as 1,024. The global alignment module attempts to align domains with two classifier networks F1 and F2 to keep the discriminative features given the support of source domain decision boundaries. The two classifiers F1 and F2 take the features fi and classify them into K classes as pj(yi|xi) = Fj ( fi|ΘjF ) , j = 1, 2, where pj(yi|xi) is the K-dimensional probabilistic softmax results of classifiers. To train the model, the total loss is composed of two parts: the task loss and discrepancy loss. 
Similar as most UDA methods, the object of task loss is to minimize the empirical risk on source domain {Xs, Ys}, which is formulated as follows: Lcls(Xs, Ys) = −E(xs,ys)∼(Xs,Ys) K∑ k=1 1[k=ys]log(p((y = ys)|G(E(xs|ΘE)|ΘG))). (7) The discrepancy loss is calculated as the l1 distance between the softmax scores of two classifiers: Ldis(xt) = Ext∼Xt [|p1(y|xt)− p2(y|xt)|]. (8) 3.4 Training Procedure We apply the Back-Propagation [25] to optimize the whole framework under the end-to-end training scenario. The training process is composed of two steps in total: Step1. Firstly, it is required to train two classifiers F1 and F2 with the discrepancy loss Ldis in Eq. (8) and classification loss Lcls obtained in Eq. (7). The discrepancy loss, which requires to be maximized, helps gather target features given the support of the source domain. The classification loss is applied to minimize the empirical risk on source domain. The objective function is as follows: min F1,F2 Lcls − λLdis. (9) Step2. In this step, we train the generator G, encoder E, the node attention network W and transform network R by minimizing the discrepancy loss, classification loss and MMD loss to achieve discriminative and domain-invariant features. The objective function in this step is formulated as: min G,E,W,R Lcls + λLdis + βLmmd, (10) where both λ and β are hyper-parameters which manually assigned as 1. 3.5 Theoretical Analysis In this section, we analyze our method in terms of theH∆H- distance theory [1]. TheH∆H-distance is defined as dH∆H(S, T ) = 2 sup h1,h2∈H |Px∼S [h1(x) 6= h2(x)]− Px∼T [h1(x 6= h2(x))]| , (11) which represents the discrepancy between the target and source distributions, T and S , with regard to the hypothesis classH. According to [1], the error of classifier h on the target domain T (h) can be bounded by the sum of the source domain error S(h), theH∆H- distance and a constant C which is independent of h, i.e., T (h) ≤ S(h) + 1 2 dH∆H(S, T ) + C. (12) The relationship between our method and theH∆H- distance will be discussed in the following. The H∆H- distance can also be denoted as below: dH∆H(S, T ) = 2 sup h1,h2∈H ∣∣Ex∼S1[h1(x)6=h2(x)] − Ex∼T 1[h1(x)6=h2(x)]∣∣ . (13) As the term Ex∼S1[h1(x)6=h2(x)] would be very small if h1 and h2 can classify samples over S correctly. In our case, p1 and p2 correspond to h1 and h2 respectively, which agree on their predictions on source samples S. As a result, dH∆H(S, T ) can be approximately calculated by suph1,h2∈H Ex∼T 1[h1(x)6=h2(x)], which is the supremum of Ldis in our problem. If decomposing the hypothesis h1 into G and F1, and h2 into G and F2, and fix G, we can get sup h1,h2∈H Ex∼T 1[h1(x)6=h2(x)] = sup F1,F2 Ex∼T 1[F1◦G(x)6=F2◦G(x)]. (14) Further, we replace sup with max, and attempt to minimize (14) with respect to G: min G max F1,F2 Ex∼T 1[F1◦G(x)6=F2◦G(x)]. (15) Problem (15) is similar to the problem (9,10) in our method. Consider the discrepancy loss Ldis, we first train classifiers F1, F2 to maximize Ldis on the target domain and next train generator G to minimize Ldis, which matches with problem (15). Although we also need consider the source loss Lcls and MMD loss Lmmd, we can see from [1] that our method still has a close connection to the H∆H- distance. Thus, by iteratively train F1, F2 and G, we can effectively reduce dH∆H(S, T ), and further lead to the better approximate T (h) by S(h). Table 1: Number of samples in proposed datasets. 
Dataset Bathtub Bed Bookshelf Cabinet Chair Lamp Monitor Plant Sofa Table Total M Train 106 515 572 200 889 124 465 240 680 392 4, 183Test 50 100 100 86 100 20 100 100 100 100 856 S Train 599 167 310 1, 076 4, 612 1, 620 762 158 2, 198 5, 876 17, 378Test 85 23 50 126 662 232 112 30 330 842 2, 492 S* Train 98 329 464 650 2, 578 161 210 88 495 1, 037 6, 110Test 26 85 146 149 801 41 61 25 134 301 1, 769 4 PointDA-10 Dataset Bathtub Bed Bookshelf Cabinet Chair Plant Sofa Table ModelNet-10 ShapeNet-10 ScanNet-10 ModelNet-10 ShapeNet-10 ScanNet-10 Lamp Monitor Figure 3: Samples of PointDA-10 dataset. As there is no 3D point cloud benchmark designed for domain adaptation, we propose three datasets with different characteristics, i.e., ModelNet10, ShapeNet-10, ScanNet-10, for the evaluation of point cloud DA methods. To build them, we extract the samples in 10 shared classes from ModelNet40 [35], ShapeNet [3] and ScanNet [5] respectively. The statistic and visualization are shown in Table 1 and Fig. 3. Given the access to the three subdatasets, we organize six types of adaptation scenarios which are M→ S, M→ S*, S→ M, S→ S*, S*→ M and S*→ S respectively. ModelNet-10 (M): ModelNet40 con- tains clean 3D CAD models of 40 categories. To extract overlapped classes, we regard ’nightstand’ class in ModelNet40 as ’cabinet’ class in ModelNet-10, because these two objects almost share the same structure. After getting the CAD model, we sample points on the surface as [23] to fully cover the object. ShapeNet-10 (S): ShapeNetCore contains 3D CAD models of 55 categories gathered from online repositories. ShapeNet contains more samples and its objects have larger variance in structure compared with ModelNet. We apply uniform sampling to collect the points of ShapeNet on surface, which, compared with ModelNet, may lose some marginal points. ScanNet-10 (S*): ScanNet contains scanned and reconstructed real-world indoor scenes. We isolate 10 classes instances contained in annotated bounding boxes for classification. The objects often lose some parts and get occluded by surroundings. ScanNet is a challenging but realistic domain. 5 Experiments 5.1 Experiments Setup In this section, we evaluate the proposed method under the standard protocol [11] of unsupervised domain adaptation on the task of point cloud data classification. Implementation Details: We choose the PointNet [22] as the backbone of Encoder E and Generator G and apply a two-layer multilayer perceptron (MLP) as F1 and F2. The proposed approach is implemented on PyTorch with Adam [15] as the optimizer and a NVIDIA TITAN GPU for training. The learning rate is assigned as 0.0001 under the weight decay 0.0005. All models have been trained for 200 epochs of batch size 64. We extract the SA node features from the third convolution layer (i.e., conv3) for local-level alignment and the number of SA node is assigned as 64. Baselines: We compare the proposed method with a serial of general-purpose UDA methods including: Maximum Mean Discrepancy (MMD) [18], Adversarial Discriminative Domain Adaptation (ADDA) [32], Domain Adversarial Neural Network (DANN) [10], and Maximum Classifier Discrep- ancy (MCD) [26]. During these experiments, we take the same loss and the same training policy. w/o Adapt refers to the model trained only by source samples and Supervised means fully supervised method. 
Ablation Study Setup: To analyze the effect of each module, we conduct an ablation study over four components: global feature alignment (G), local feature alignment (L), SA node attention (A), and self-training [42] (P), which finetunes the model with 10% pseudo target labels generated from the target samples with the highest softmax scores (see the sketch after this subsection).

Evaluation: Given the labeled samples from the source domain and unlabeled samples from the target domain for training, all models are evaluated on the test set of the target domain. All experiments are repeated three times, and we report the average top-1 classification accuracy in all tables.

5.2 Classification Results on the PointDA-10 Dataset
The quantitative results and comparisons on the PointDA-10 dataset are summarized in Table 2. The proposed method outperforms all general-purpose baselines on all adaptation scenarios. Although the largest domain gaps appear on M→S* and S→S*, our method exhibits large improvements there, demonstrating its superiority in aligning different domains. Among the baselines, MMD, although outperformed by GAN-based methods in 2D vision tasks, is only slightly inferior here and even outperforms them on some domain pairs. This phenomenon can be explained by global features limiting the upper bound, due to their weakness in representing diversified geometric information. In addition, there still exists a large margin between the supervised method and the DA methods. Table 3 presents the class-wise classification results on the domain pair M→S. Local alignment helps boost the performance on most classes, especially Monitor and Chair. However, some objects, e.g., sofa and bed, are quite challenging to recognize under the UDA scenario, where negative transfer happens and the performance can drop on these classes. Moreover, we observed that imbalanced training samples do affect the performance of our model and of the other domain adaptation (DA) models, which makes Table 3 slightly noisy. Chair, Table, and Sofa (easily confused with Bed) cover more than 60% of the samples in the M-to-S scenario, which causes the drop on certain classes (e.g., Bed and Sofa).

5.3 Quantitative Analysis
Ablation Study: We further analyze the effect of the four components proposed in our model (i.e., G, L, A, P). From Table 2, we find that adding local alignment together with SA nodes brings significant improvement, whereas local alignment with fixed nodes alone does not improve much. These results validate the effectiveness of our SA nodes, which we attribute to their self-adaptive regional receptive fields and significance weights. An interesting phenomenon in Table 3 is that the full version of the method is beaten by G+L+A in class-wise accuracy. This indicates that the inference of pseudo labels is easily influenced by the imbalanced distribution of samples across classes, where certain classes dominate the self-training process and cause error accumulation.

Convergence: We evaluate the convergence of the proposed methods as well as the baseline methods on ModelNet-to-ShapeNet in Fig. 4(d). Compared with the baseline methods, local alignment helps accelerate convergence and makes training more stable after convergence.

SA Node Feature Extraction Layer: The influence of different layers for mid-level feature extraction is analyzed in Fig. 4(c) on M→S and S*→M. Compared with conv1 and conv2, whose features are less semantic, conv3 provides the best mid-level features for local alignment.
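A small sketch of the self-training component P described in the ablation setup above, i.e., selecting the 10% most confident target predictions as pseudo labels; the exact selection rule in the paper may differ in detail.

import torch

def select_pseudo_labels(probs, ratio=0.10):
    """Pick the top `ratio` most-confident target samples for self-training.

    probs: (n, K) softmax outputs on unlabeled target data.
    Returns indices into the target set and their hard pseudo labels.
    """
    conf, labels = probs.max(dim=1)          # per-sample confidence and argmax class
    k = max(1, int(ratio * probs.size(0)))   # 10% of the target samples by default
    idx = conf.topk(k).indices
    return idx, labels[idx]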
5.4 Results Visualization
We visualize the SA nodes that contribute most to the local alignment of two cross-domain objects to interpret the effectiveness of local feature alignment in Fig. 4(a)-4(b). The matched nodes are selected as the elements with the highest values in the matrix $\mathbf{M} = \mathbf{h}_i^s \times (\mathbf{h}_j^t)^\top \in \mathbb{R}^{64\times 64}$ obtained from Eq. (5). It is easily observed that the SA nodes representing similar geometric structures, e.g., legs and planes, contribute most to local alignment, whether between the same objects or different objects across domains. This clearly demonstrates the common knowledge learned by the SA nodes for local alignment.

6 Conclusion
In this paper, we propose a novel 3D unsupervised domain adaptation network for point cloud data (PointDAN). PointDAN is a specifically designed framework based on multi-scale feature alignment. For local feature alignment, we introduce Self-Adaptive (SA) nodes to represent common geometric structures across domains, and we apply a GAN-based method to align features globally. To evaluate the proposed model, we build a new 3D domain adaptation benchmark. In the experiments, we have demonstrated the superiority of our approach over state-of-the-art domain adaptation methods.

Acknowledgements
We thank Qianqian Ma from Boston University for her helpful theoretical insights and comments on our work.
1. What is the main contribution of the paper, and how does it combine existing techniques? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to improve predictive performance? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. What are the limitations of the paper's contribution, and how might it be perceived by the community? 5. Are there any minor issues or suggestions for improvement in the paper's presentation or formatting?
Review
Review # Originality
The submission mostly combines the domain adaptation loss Maximum Classifier Discrepancy [23] with additional learned local features ("Local Feature Alignment") for point cloud classification tasks with unlabeled target-domain samples. The driving classification architecture is borrowed from PointNet. The main contribution here lies in the local features that raise the predictive performance on the target task: small regions, which are centered around sampled point cloud points, are first moved by a learned offset (to better capture commonalities of the current object) and then weighted by an attention network (to identify important features). The features of these regions are derived from early stages of a PointNet architecture. The final local features are then fed into later layers of a PointNet architecture for classification. The training is done by alternating the training steps from the Maximum Classifier Discrepancy publication [23].
# Quality
The ablation study shows that, on average across multiple domain adaptation tasks, the added adaptable local features seem to improve over a direct application of general-purpose domain adaptation techniques. However, the effect on different classes seems to vary.
# Clarity
The description of the architecture and methodology is clear enough.
# Significance
The contribution -- though successful -- might be of limited significance to the community, for mostly two reasons: the derived local feature alignment seems to be mostly a learned weighting and offsetting of PointNet features, and the success across classes as shown in Table 3 seems noisy; some classes profit from the proposed method (e.g., cabinet) and some don't (e.g., lamp).
Minor fixes:
- line 25: systems?
- line 156, eq 2: Maybe rewriting the equation in the style of an assignment would make sense here?
- line 212, eq 10: missing closing parenthesis for h_1(x)?
- table 3: MCD and table: Probably remove '1c' here?
NIPS
Title PointDAN: A Multi-Scale 3D Domain Adaption Network for Point Cloud Representation

Abstract Can Qin∗, Haoxuan You∗, Lichen Wang, C.-C. Jay Kuo, Yun Fu Department of Electrical & Computer Engineering, Northeastern University Department of Computer Science, Columbia University Department of Electrical and Computer Engineering, University of Southern California Khoury College of Computer Science, Northeastern University [email protected], [email protected], [email protected], [email protected], [email protected]

1 Introduction
3D vision has achieved promising outcomes in wide-ranging real-world applications (e.g., autonomous cars, robots, and surveillance systems). Enormous amounts of 3D point cloud data are captured by depth cameras or LiDAR sensors nowadays, and sophisticated 3D vision and machine learning algorithms are required to analyze their content for further exploitation. Recently, the advent of Deep Neural Networks (DNNs) has greatly boosted the performance of 3D vision understanding, including the tasks of classification, detection, and segmentation [22, 9, 37, 41]. Despite this impressive success, DNNs require massive amounts of labeled data for training, which are time-consuming and expensive to collect. This issue significantly limits their adoption in the real world. Domain adaptation (DA) addresses this problem by building a model that utilizes the knowledge of a label-rich dataset, i.e., the source domain, and generalizes well on a label-scarce dataset, i.e., the target domain.

1The PointDA-10 data and official code are available at https://github.com/canqin001/PointDAN ∗Equal Contribution. 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

However, due to the shifts in distribution across different domains/datasets, a model trained on one domain usually performs poorly on other domains. Most DA methods address this problem by either mapping the original features into a shared subspace or minimizing instance-level distances, such as MMD, CORAL, etc., to mix cross-domain features [2, 18, 31]. More recently, inspired by the Generative Adversarial Network (GAN) [12], adversarial-training DA methods, such as DANN, ADDA and MCD, have achieved promising performance in DA and drawn increasing attention [10, 32, 26]. They deploy a zero-sum game between a discriminator and a generator to learn domain-invariant representations. However, most existing DA approaches mainly target 2D vision tasks and globally align the distribution shifts between different domains. For 3D point cloud data, in contrast, the geometric structures in 3D space can be described in detail, and different local structures have clear semantic meaning, such as legs for chairs, which in turn combine to form the global semantics of a whole object. As shown in Fig. 1, two 3D objects might be hard to align globally but have similar local 3D structures, which are easier to align. A domain adaptation framework that focuses on local geometric structures is therefore highly desirable in the 3D DA scenario. To this end, this paper introduces a novel point-based Unsupervised Domain Adaptation Network (PointDAN) to achieve unsupervised domain adaptation (UDA) for 3D point cloud data. The key to our approach is to jointly align the multi-scale, i.e., global and local, features of point cloud data in an end-to-end manner. Specifically, Self-Adaptive (SA) nodes associated with an adjusted receptive field are proposed to dynamically gather and align local features across domains.
Moreover, a node attention module is further designed to explore and interpret the relationships between nodes and their contributions to alignment. Meanwhile, an adversarial-training strategy is deployed to align the global features. Since there were few benchmarks for DA on 3D data (i.e., point clouds), we build a new benchmark named the PointDA-10 dataset for 3D vision DA. It is generated by selecting the samples of 10 overlapping categories among three popular datasets (i.e., ModelNet [35], ShapeNet [3] and ScanNet [5]). In all, the contributions of our paper can be summarized as three-fold:
• We introduce a novel 3D-point-based unsupervised domain adaptation method that locally and globally aligns the distributions of 3D objects across different domains.
• For local feature alignment, we propose Self-Adaptive (SA) nodes with node attention to utilize local geometric information and dynamically gather regional structures for aligning local distributions across different domains.
• We collect a new 3D point cloud DA benchmark, named the PointDA-10 dataset, for fair evaluation of 3D DA methods. Extensive experiments on PointDA-10 demonstrate the superiority of our model over state-of-the-art general-purpose DA methods.

2 Related Works
2.1 3D Vision Understanding
Different from 2D vision, 3D vision has various data representation modalities: multi-view, voxel grid, 3D mesh and point cloud data. Deep networks have been employed to deal with these different formats of 3D data [29, 19, 36, 8]. Among these modalities, the point cloud, represented by a set of points with 3D coordinates {x, y, z}, is the most straightforward representation for preserving 3D spatial information. Point clouds can be directly obtained by LiDAR sensors, which enables many 3D environment understanding applications, from scene segmentation to autonomous driving. PointNet [22] is the first deep neural network to directly process point clouds; it proposes a symmetry function and a spatial transform network to obtain invariance to point permutation. However, local geometric information, which is vital for describing objects in 3D space, is ignored by PointNet. Recent work therefore mainly focuses on how to effectively utilize local features. For instance, in PointNet++ [23], a series of PointNet structures are applied to local point sets of varied sizes, and local features are gathered in a hierarchical way. PointCNN [17] proposes χ-Conv to aggregate features in local patches and applies a bottom-up network structure like typical CNNs. In 3D object detection tasks, [41] proposes to divide a large scene into many voxels, where the features of the inside points are extracted respectively and a 3D Region Proposal Network (RPN) structure follows to obtain detection predictions. In spite of this broad usage, point cloud data has significant drawbacks in labeling efficiency. During labeling, annotators need to rotate an object several times and view it from different angles to identify it. In real-world environments where point cloud data are scanned by LiDAR, it also happens that some parts are lost or occluded (e.g., tables lose legs), which makes efficient labeling even more difficult. Under these circumstances, a 3D point-based unsupervised domain adaptation method specifically designed to mitigate the domain gap between labeled source data and unlabeled target data is highly desired.
2.2 Unsupervised Domain Adaptation (UDA)
The main challenge of UDA is that a distribution shift (i.e., domain gap) exists between the target and source domains. It violates the basic assumption of conventional machine learning algorithms that training samples and test samples share the same distribution. To bridge the domain gap, UDA approaches match either the marginal distributions [30, 21, 11, 33] or the conditional distributions [38, 4] between domains via feature alignment. They address this problem by learning a mapping function f which projects the raw image features into a feature space shared across domains. Most of them attempt to maximize the inter-class discrepancy while simultaneously minimizing the intra-class distance in a subspace. Various methods based on, e.g., Correlation Alignment (CORAL) [31], Maximum Mean Discrepancy (MMD) [2, 18], or the geodesic distance [13] have been proposed. Apart from the methods mentioned above, many DNN-based domain adaptation methods have been proposed owing to their great capacity for representation learning [14, 28, 16]. The key to these methods is to apply DNNs to learn domain-invariant features in an end-to-end training scenario. Another kind of approach utilizes an adversarial training strategy to obtain domain-invariant representations [10, 32, 7, 24]. It involves a discriminator and a generator, where the generator aims to fool the discriminator until the discriminator is unable to distinguish the generated features between the two domains. Such approaches include Adversarial Discriminative Domain Adaptation (ADDA) [32], Domain Adversarial Neural Network (DANN) [10], and Maximum Classifier Discrepancy (MCD) [26]. Most UDA methods are designed for 2D vision tasks and focus on the alignment of global image features across different domains, whereas in 3D data analysis tasks, regional and local geometric information is crucial for achieving good learning performance. Zhou et al. [40] first introduced UDA to the task of 3D keypoint estimation, relying on the regularization of a multi-view consistency term. However, this method cannot be extended to more general tasks, e.g., classification. In [27, 34], point cloud data are first projected into 2D images (bird's-eye view or front view) and 2D DA methods are applied, which loses essential 3D geometric information. To this end, we propose a generalized 3D point-based UDA framework. It well preserves local structures and explores the global correlations of all local features. Adversarial training strategies are further employed to locally and globally align the distribution shifts between the source and target domains.

3 Proposed Model
3.1 Problem Definition and Notation
In 3D point-based UDA, we have access to a labeled source domain $\mathcal{S} = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$, where $y_i^s \in \mathcal{Y} = \{1, \ldots, Y\}$, with $n_s$ annotated pairs, and a target domain $\mathcal{T} = \{x_j^t\}_{j=1}^{n_t}$ of $n_t$ unlabeled data points. The inputs are point clouds, usually represented by 3-dimensional coordinates $(x, y, z)$, where $x_i^s, x_j^t \in \mathcal{X} \subset \mathbb{R}^{T\times 3}$, with $T$ the number of points sampled from one 3D object, and the two domains share the same label space $\mathcal{Y}_s = \mathcal{Y}_t$. It is further assumed that the two domains are sampled from the distributions $P_s(x_i^s, y_i^s)$ and $P_t(x_i^t, y_i^t)$ respectively, while the i.i.d. assumption is violated due to the distribution shift $P_s \neq P_t$. The key to UDA is to learn a mapping function $\Phi: \mathcal{X} \rightarrow \mathbb{R}^d$ that projects raw inputs into a feature space $\mathcal{H}$ shared and spreadable across cross-domain samples.
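As a rough illustration of such a mapping Φ over T×3 point clouds, consider the toy PointNet-style encoder below; the layer sizes and names are purely illustrative assumptions, not the architecture used in the paper.

import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Toy stand-in for Phi: X -> R^d over (batch, T, 3) point clouds.

    A shared per-point MLP followed by max-pooling, in the spirit of PointNet.
    """
    def __init__(self, d=1024):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, pts):                       # pts: (batch, T, 3)
        return self.mlp(pts).max(dim=1).values    # permutation-invariant (batch, d)

# Example: embed a batch of 4 clouds with T = 1024 points each.
print(PointEncoder(d=128)(torch.randn(4, 1024, 3)).shape)  # torch.Size([4, 128])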
3.2 Local Feature Alignment
Local geometric information plays an important role in describing point cloud objects as well as in domain alignment. As illustrated in Fig. 1, given the same "table" class, the instance from ScanNet misses parts of its legs due to obstacles during LiDAR scanning. The key to aligning these two "tables" is to extract and match the features of similar structures, i.e., the plane surfaces, while ignoring the differing parts. To utilize local geometric information, we propose to adaptively select and update key nodes for better local alignment.

Self-Adaptive Node Construction: Here we give the definition of a node in a point cloud. For each point cloud, we represent its $n$ local geometric structures as $n$ point sets $\{S_c \mid S_c = \{\hat{x}_c, x_{c1}, \ldots, x_{ck}\},\ x \subseteq \mathbb{R}^3\}_{c=1}^{n}$, where the $c$-th region $S_c$ contains a node $\hat{x}_c$ and its $k$ nearest neighbor points $\{x_{c1}, \ldots, x_{ck}\}$. The location of a node decides where the local region is and which points are included. To obtain local features, directly employing farthest point sampling or random sampling to get the center nodes is common in previous work [23, 17]. These methods guarantee full coverage of the whole point cloud. However, for domain alignment it is essential to make sure that these nodes cover structures with common characteristics in 3D geometric space and drop the parts unique to certain objects. In this way, local regions sharing similar structures are more suitable for alignment, while the uncommon parts would introduce a negative transfer influence. Inspired by deformable convolution in 2D vision [6], we propose a novel geometric-guided shift learning module, which makes the input nodes self-adaptive in their receptive fields. Different from deformable convolution, where semantic features are used for predicting offsets, we use the local edge vectors as guidance during learning. As shown in Fig. 2, our module transforms the semantic information of each edge into a weight, and we then aggregate the weighted edge vectors together to obtain the predicted offset direction. Intuitively, the predicted shift is decided by the voting of surrounding edges with different significance. We first initialize the node locations by farthest point sampling over the point cloud to get $n$ nodes, and their $k$ nearest neighbor points are collected to form $n$ regions. For the $c$-th node, its offset is computed as:

$$\Delta\hat{x}_c = \frac{1}{k}\sum_{j=1}^{k}\big(\mathcal{R}_T(v_{cj} - \hat{v}_c)\cdot(x_{cj} - \hat{x}_c)\big), \qquad (1)$$

where $\hat{x}_c$ and $x_{cj}$ denote the locations of the node and its neighbor points, so $x_{cj} - \hat{x}_c$ is the edge direction. $v_{cj}$ and $\hat{v}_c$ are their mid-level point features extracted from the encoder $v = E(x|\Theta_E)$, and $\mathcal{R}_T$ is the weight of one convolution layer for transforming features. We apply the bottom 3 feature extraction layers of PointNet as the encoder $E$. $\Delta\hat{x}_c$ is the predicted location offset of the $c$-th node. After obtaining the learned shift $\Delta\hat{x}_c$, we achieve the self-adaptive update of the nodes and their regions by adding the shift back to the node $\hat{x}_c$ and finding its $k$ nearest neighbor points:

$$\hat{x}_c \leftarrow \hat{x}_c + \Delta\hat{x}_c, \qquad (2)$$
$$\{x_{c1}, \ldots, x_{ck}\} = \mathrm{kNN}(\hat{x}_c \mid x_j,\ j = 0, \ldots, M-1). \qquad (3)$$

Then the final node feature $\hat{v}_c$ is computed by gathering all the point features inside the region:

$$\hat{v}_c = \max_{j=1,\ldots,k}\mathcal{R}_G(v_{cj}), \qquad (4)$$

where $\mathcal{R}_G$ is the weight of one convolution layer for gathering point features, with $\mathcal{R}_G \cup \mathcal{R}_T = \mathcal{R}$, and the output node features are employed for local alignment.
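A minimal PyTorch sketch of the geometric-guided shift in Eqs. (1)-(2) follows; the tensor shapes and the scalar edge-weighting via a linear layer standing in for R_T are simplifying assumptions, not the exact released implementation.

import torch

def sa_node_offsets(nodes, neighbors, node_feats, neigh_feats, transform):
    """Shift SA nodes by a weighted vote of their edges, per Eqs. (1)-(2).

    nodes:       (n, 3) node coordinates
    neighbors:   (n, k, 3) coordinates of each node's k nearest neighbors
    node_feats:  (n, d) mid-level features of the nodes
    neigh_feats: (n, k, d) mid-level features of the neighbor points
    transform:   layer playing the role of R_T, mapping d -> 1 edge weight
    """
    edge_vec = neighbors - nodes.unsqueeze(1)          # x_cj - x_c, (n, k, 3)
    edge_feat = neigh_feats - node_feats.unsqueeze(1)  # v_cj - v_c, (n, k, d)
    weights = transform(edge_feat)                     # (n, k, 1) per-edge votes
    offsets = (weights * edge_vec).mean(dim=1)         # average weighted edges
    return nodes + offsets                             # shifted nodes, Eq. (2)

# Example: 32 nodes, 16 neighbors each, 64-d mid-level features.
transform = torch.nn.Linear(64, 1)
new_nodes = sa_node_offsets(torch.randn(32, 3), torch.randn(32, 16, 3),
                            torch.randn(32, 64), torch.randn(32, 16, 64), transform)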
To better engage the SA node features, we also interpolate them back to each point following the interpolation strategy in [23] and fuse them with the original point features via a skip connection. The fused features are input into the next-stage generator for higher-level processing.

SA Node Attention: Even with SA nodes, it is unreasonable to assume that every SA node contributes equally to domain alignment. An attention module, designed to model the relationships between nodes, is necessary for weighting the contributions of different SA nodes to domain alignment and for capturing features at larger spatial scales. Inspired by channel attention [39], we apply a node attention network to model the contribution of each SA node to alignment by introducing a bottleneck network with a residual structure [14]:

$$h_c = \varphi\big(\mathcal{W}_U\,\delta(\mathcal{W}_D z_c)\big)\cdot\hat{v}_c + \hat{v}_c, \qquad (5)$$

where $z_c = \mathbb{E}(\hat{v}_c(k))$ indicates the mean of the $c$-th node feature. $\delta(\cdot)$ and $\varphi(\cdot)$ represent the ReLU [20] and sigmoid functions, respectively. $\mathcal{W}_D$ is the weight set of a convolutional layer with $1\times 1$ kernels, which reduces the number of channels by the ratio $r$. The channel-upscaling layer $\mathcal{W}_U$, where $\mathcal{W}_U \cup \mathcal{W}_D = \mathcal{W}$, increases the number of channels back to the original by the ratio $r$.

SA Node Feature Alignment: The optimization of both the offsets and the network parameters for local alignment is sensitive to gradient disturbance, which makes GAN-based methods unstable here. Therefore, we minimize the MMD [2, 18] loss to align cross-domain SA node features:

$$\mathcal{L}_{mmd} = \frac{1}{n_s^2}\sum_{i,j=1}^{n_s}\kappa(h_i^s, h_j^s) - \frac{2}{n_s n_t}\sum_{i,j=1}^{n_s, n_t}\kappa(h_i^s, h_j^t) + \frac{1}{n_t^2}\sum_{i,j=1}^{n_t}\kappa(h_i^t, h_j^t), \qquad (6)$$

where $\kappa$ is a kernel function; we apply the Radial Basis Function (RBF) kernel in our model.

3.3 Global Feature Alignment
Given the feature $f_i \in \mathbb{R}^d$ of the $i$-th sample produced by a generator network, global feature alignment attempts to minimize the distance between features across different domains. Unlike local feature alignment, the global feature alignment process is more stable due to the invariance of the receptive field of the inputs, which provides more options for choosing GAN-based methods. In this paper, we apply Maximum Classifier Discrepancy (MCD) [26] for global feature alignment due to its outstanding performance in general-purpose domain alignment. The encoder $E$ designed for SA node feature extraction is also applied to extract raw point cloud features $\tilde{h}_i = E(x_i|\Theta_E)$ over the whole object, and the point features are concatenated with the interpolated SA-node features as $\hat{h}_i = [h_i, \tilde{h}_i]$ to capture geometric information at multiple scales. Then, we feed $\hat{h}_i$ to the generator network $G$, which is the final convolution layer (i.e., conv4) of PointNet followed by a global max-pooling, to obtain the high-level global feature $f_i = \mathrm{maxpool}(G(\hat{h}_i|\Theta_G))$, where $f_i \in \mathbb{R}^d$ represents the global feature of the $i$-th sample and $d$ is usually set to 1,024. The global alignment module attempts to align the domains with two classifier networks $F_1$ and $F_2$, keeping the features discriminative given the support of the source-domain decision boundaries. The two classifiers $F_1$ and $F_2$ take the features $f_i$ and classify them into $K$ classes as $p_j(y_i|x_i) = F_j(f_i|\Theta_F^j),\ j = 1, 2$, where $p_j(y_i|x_i)$ is the $K$-dimensional probabilistic softmax output of classifier $j$. To train the model, the total loss is composed of two parts: the task loss and the discrepancy loss.
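The RBF-kernel MMD of Eq. (6) can be sketched as follows, written with the conventional minus-two cross term; the kernel bandwidth gamma is an assumed free parameter.

import torch

def rbf_kernel(a, b, gamma=1.0):
    # kappa(x, y) = exp(-gamma * ||x - y||^2), the RBF kernel used in Eq. (6).
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-gamma * d2)

def mmd_loss(h_s, h_t, gamma=1.0):
    # Biased empirical MMD^2 between source and target SA-node features.
    k_ss = rbf_kernel(h_s, h_s, gamma).mean()
    k_st = rbf_kernel(h_s, h_t, gamma).mean()
    k_tt = rbf_kernel(h_t, h_t, gamma).mean()
    return k_ss - 2.0 * k_st + k_tt

# Example: 64 SA-node features of dimension 128 from each domain.
print(mmd_loss(torch.randn(64, 128), torch.randn(64, 128)))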
As in most UDA methods, the objective of the task loss is to minimize the empirical risk on the source domain $\{X_s, Y_s\}$, which is formulated as follows:

$$\mathcal{L}_{cls}(X_s, Y_s) = -\mathbb{E}_{(x_s, y_s)\sim(X_s, Y_s)}\sum_{k=1}^{K}\mathbb{1}_{[k=y_s]}\log p\big(y = y_s \mid G(E(x_s|\Theta_E)|\Theta_G)\big). \qquad (7)$$

The discrepancy loss is calculated as the $l_1$ distance between the softmax scores of the two classifiers:

$$\mathcal{L}_{dis}(x_t) = \mathbb{E}_{x_t\sim X_t}\big[\,|p_1(y|x_t) - p_2(y|x_t)|\,\big]. \qquad (8)$$

3.4 Training Procedure
We apply back-propagation [25] to optimize the whole framework in an end-to-end training scenario. The training process is composed of two steps:

Step 1. First, the two classifiers F1 and F2 are trained with the discrepancy loss $\mathcal{L}_{dis}$ in Eq. (8) and the classification loss $\mathcal{L}_{cls}$ in Eq. (7). The discrepancy loss, which is to be maximized, helps gather target features given the support of the source domain. The classification loss minimizes the empirical risk on the source domain. The objective function is:

$$\min_{F_1, F_2}\ \mathcal{L}_{cls} - \lambda\mathcal{L}_{dis}. \qquad (9)$$

Step 2. In this step, we train the generator G, the encoder E, the node attention network $\mathcal{W}$ and the transform network $\mathcal{R}$ by minimizing the discrepancy loss, the classification loss and the MMD loss to obtain discriminative and domain-invariant features. The objective function in this step is:

$$\min_{G, E, \mathcal{W}, \mathcal{R}}\ \mathcal{L}_{cls} + \lambda\mathcal{L}_{dis} + \beta\mathcal{L}_{mmd}, \qquad (10)$$

where $\lambda$ and $\beta$ are hyper-parameters, both manually set to 1.

3.5 Theoretical Analysis
In this section, we analyze our method in terms of the $\mathcal{H}\Delta\mathcal{H}$-distance theory [1]. The $\mathcal{H}\Delta\mathcal{H}$-distance is defined as

$$d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) = 2\sup_{h_1, h_2\in\mathcal{H}}\big|P_{x\sim\mathcal{S}}[h_1(x)\neq h_2(x)] - P_{x\sim\mathcal{T}}[h_1(x)\neq h_2(x)]\big|, \qquad (11)$$

which represents the discrepancy between the target and source distributions, $\mathcal{T}$ and $\mathcal{S}$, with regard to the hypothesis class $\mathcal{H}$. According to [1], the error of a classifier $h$ on the target domain, $\epsilon_\mathcal{T}(h)$, can be bounded by the sum of the source-domain error $\epsilon_\mathcal{S}(h)$, the $\mathcal{H}\Delta\mathcal{H}$-distance and a constant $C$ which is independent of $h$, i.e.,

$$\epsilon_\mathcal{T}(h) \leq \epsilon_\mathcal{S}(h) + \frac{1}{2}d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) + C. \qquad (12)$$

The relationship between our method and the $\mathcal{H}\Delta\mathcal{H}$-distance is discussed in the following. The $\mathcal{H}\Delta\mathcal{H}$-distance can also be written as:

$$d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T}) = 2\sup_{h_1, h_2\in\mathcal{H}}\big|\mathbb{E}_{x\sim\mathcal{S}}\mathbb{1}_{[h_1(x)\neq h_2(x)]} - \mathbb{E}_{x\sim\mathcal{T}}\mathbb{1}_{[h_1(x)\neq h_2(x)]}\big|. \qquad (13)$$

The term $\mathbb{E}_{x\sim\mathcal{S}}\mathbb{1}_{[h_1(x)\neq h_2(x)]}$ is very small if $h_1$ and $h_2$ classify samples over $\mathcal{S}$ correctly. In our case, $p_1$ and $p_2$ correspond to $h_1$ and $h_2$ respectively, and they agree in their predictions on source samples $\mathcal{S}$. As a result, $d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T})$ can be approximated by $\sup_{h_1, h_2\in\mathcal{H}}\mathbb{E}_{x\sim\mathcal{T}}\mathbb{1}_{[h_1(x)\neq h_2(x)]}$, which is the supremum of $\mathcal{L}_{dis}$ in our problem. Decomposing the hypothesis $h_1$ into $G$ and $F_1$, and $h_2$ into $G$ and $F_2$, and fixing $G$, we get

$$\sup_{h_1, h_2\in\mathcal{H}}\mathbb{E}_{x\sim\mathcal{T}}\mathbb{1}_{[h_1(x)\neq h_2(x)]} = \sup_{F_1, F_2}\mathbb{E}_{x\sim\mathcal{T}}\mathbb{1}_{[F_1\circ G(x)\neq F_2\circ G(x)]}. \qquad (14)$$

Further, replacing sup with max, we attempt to minimize (14) with respect to $G$:

$$\min_{G}\max_{F_1, F_2}\mathbb{E}_{x\sim\mathcal{T}}\mathbb{1}_{[F_1\circ G(x)\neq F_2\circ G(x)]}. \qquad (15)$$

Problem (15) is similar to problems (9) and (10) in our method. Considering the discrepancy loss $\mathcal{L}_{dis}$, we first train the classifiers $F_1, F_2$ to maximize $\mathcal{L}_{dis}$ on the target domain and then train the generator $G$ to minimize $\mathcal{L}_{dis}$, which matches problem (15). Although we must also consider the source loss $\mathcal{L}_{cls}$ and the MMD loss $\mathcal{L}_{mmd}$, it follows from [1] that our method still has a close connection to the $\mathcal{H}\Delta\mathcal{H}$-distance. Thus, by iteratively training $F_1, F_2$ and $G$, we can effectively reduce $d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{S}, \mathcal{T})$, so that the source error $\epsilon_\mathcal{S}(h)$ becomes a tighter bound on the target error $\epsilon_\mathcal{T}(h)$.
Table 1: Number of samples in the proposed datasets.

Dataset        Bathtub   Bed   Bookshelf   Cabinet   Chair   Lamp   Monitor   Plant   Sofa   Table    Total
M   Train          106   515         572       200     889    124       465     240    680     392    4,183
M   Test            50   100         100        86     100     20       100     100    100     100      856
S   Train          599   167         310     1,076   4,612  1,620       762     158  2,198   5,876   17,378
S   Test            85    23          50       126     662    232       112      30    330     842    2,492
S*  Train           98   329         464       650   2,578    161       210      88    495   1,037    6,110
S*  Test            26    85         146       149     801     41        61      25    134     301    1,769

4 PointDA-10 Dataset

Figure 3: Samples of the PointDA-10 dataset (examples of the 10 shared categories from ModelNet-10, ShapeNet-10 and ScanNet-10).

As there is no 3D point cloud benchmark designed for domain adaptation, we propose three datasets with different characteristics, i.e., ModelNet-10, ShapeNet-10 and ScanNet-10, for the evaluation of point cloud DA methods. To build them, we extract the samples of the 10 shared classes from ModelNet40 [35], ShapeNet [3] and ScanNet [5], respectively. The statistics and visualizations are shown in Table 1 and Fig. 3. Given access to the three sub-datasets, we organize six types of adaptation scenarios: M→S, M→S*, S→M, S→S*, S*→M and S*→S.

ModelNet-10 (M): ModelNet40 contains clean 3D CAD models of 40 categories. To extract the overlapping classes, we regard the 'nightstand' class in ModelNet40 as the 'cabinet' class in ModelNet-10, because these two objects almost share the same structure. After obtaining the CAD models, we sample points on the surface as in [23] to fully cover the object.

ShapeNet-10 (S): ShapeNetCore contains 3D CAD models of 55 categories gathered from online repositories. ShapeNet contains more samples, and its objects have larger structural variance compared with ModelNet. We apply uniform sampling to collect the points of ShapeNet on the surface, which, compared with ModelNet, may lose some marginal points.

ScanNet-10 (S*): ScanNet contains scanned and reconstructed real-world indoor scenes. We isolate instances of the 10 classes contained in annotated bounding boxes for classification. The objects often lose some parts and are occluded by their surroundings. ScanNet is a challenging but realistic domain.

5 Experiments
5.1 Experiment Setup
In this section, we evaluate the proposed method under the standard protocol [11] of unsupervised domain adaptation on the task of point cloud classification.

Implementation Details: We choose PointNet [22] as the backbone of the encoder E and generator G, and apply a two-layer multilayer perceptron (MLP) as F1 and F2. The proposed approach is implemented in PyTorch with Adam [15] as the optimizer and an NVIDIA TITAN GPU for training. The learning rate is set to 0.0001 with weight decay 0.0005. All models are trained for 200 epochs with a batch size of 64. We extract the SA node features from the third convolution layer (i.e., conv3) for local-level alignment, and the number of SA nodes is set to 64.

Baselines: We compare the proposed method with a series of general-purpose UDA methods, including Maximum Mean Discrepancy (MMD) [18], Adversarial Discriminative Domain Adaptation (ADDA) [32], Domain Adversarial Neural Network (DANN) [10], and Maximum Classifier Discrepancy (MCD) [26]. In these experiments, we use the same loss and the same training policy. w/o Adapt refers to the model trained only on source samples, and Supervised means the fully supervised method.
Ablation Study Setup: To analyze the effect of each module, we conduct an ablation study over four components: global feature alignment (G), local feature alignment (L), SA node attention (A), and self-training [42] (P), which finetunes the model with 10% pseudo target labels generated from the target samples with the highest softmax scores.

Evaluation: Given the labeled samples from the source domain and unlabeled samples from the target domain for training, all models are evaluated on the test set of the target domain. All experiments are repeated three times, and we report the average top-1 classification accuracy in all tables.

5.2 Classification Results on the PointDA-10 Dataset
The quantitative results and comparisons on the PointDA-10 dataset are summarized in Table 2. The proposed method outperforms all general-purpose baselines on all adaptation scenarios. Although the largest domain gaps appear on M→S* and S→S*, our method exhibits large improvements there, demonstrating its superiority in aligning different domains. Among the baselines, MMD, although outperformed by GAN-based methods in 2D vision tasks, is only slightly inferior here and even outperforms them on some domain pairs. This phenomenon can be explained by global features limiting the upper bound, due to their weakness in representing diversified geometric information. In addition, there still exists a large margin between the supervised method and the DA methods. Table 3 presents the class-wise classification results on the domain pair M→S. Local alignment helps boost the performance on most classes, especially Monitor and Chair. However, some objects, e.g., sofa and bed, are quite challenging to recognize under the UDA scenario, where negative transfer happens and the performance can drop on these classes. Moreover, we observed that imbalanced training samples do affect the performance of our model and of the other domain adaptation (DA) models, which makes Table 3 slightly noisy. Chair, Table, and Sofa (easily confused with Bed) cover more than 60% of the samples in the M-to-S scenario, which causes the drop on certain classes (e.g., Bed and Sofa).

5.3 Quantitative Analysis
Ablation Study: We further analyze the effect of the four components proposed in our model (i.e., G, L, A, P). From Table 2, we find that adding local alignment together with SA nodes brings significant improvement, whereas local alignment with fixed nodes alone does not improve much. These results validate the effectiveness of our SA nodes, which we attribute to their self-adaptive regional receptive fields and significance weights. An interesting phenomenon in Table 3 is that the full version of the method is beaten by G+L+A in class-wise accuracy. This indicates that the inference of pseudo labels is easily influenced by the imbalanced distribution of samples across classes, where certain classes dominate the self-training process and cause error accumulation.

Convergence: We evaluate the convergence of the proposed methods as well as the baseline methods on ModelNet-to-ShapeNet in Fig. 4(d). Compared with the baseline methods, local alignment helps accelerate convergence and makes training more stable after convergence.

SA Node Feature Extraction Layer: The influence of different layers for mid-level feature extraction is analyzed in Fig. 4(c) on M→S and S*→M. Compared with conv1 and conv2, whose features are less semantic, conv3 provides the best mid-level features for local alignment.
5.4 Results Visualization
We visualize the SA nodes that contribute most to the local alignment of two cross-domain objects to interpret the effectiveness of local feature alignment in Fig. 4(a)-4(b). The matched nodes are selected as the elements with the highest values in the matrix $\mathbf{M} = \mathbf{h}_i^s \times (\mathbf{h}_j^t)^\top \in \mathbb{R}^{64\times 64}$ obtained from Eq. (5). It is easily observed that the SA nodes representing similar geometric structures, e.g., legs and planes, contribute most to local alignment, whether between the same objects or different objects across domains. This clearly demonstrates the common knowledge learned by the SA nodes for local alignment.

6 Conclusion
In this paper, we propose a novel 3D unsupervised domain adaptation network for point cloud data (PointDAN). PointDAN is a specifically designed framework based on multi-scale feature alignment. For local feature alignment, we introduce Self-Adaptive (SA) nodes to represent common geometric structures across domains, and we apply a GAN-based method to align features globally. To evaluate the proposed model, we build a new 3D domain adaptation benchmark. In the experiments, we have demonstrated the superiority of our approach over state-of-the-art domain adaptation methods.

Acknowledgements
We thank Qianqian Ma from Boston University for her helpful theoretical insights and comments on our work.
1. What are the main contributions and novel aspects introduced by the paper regarding domain adaptation for 3D point cloud data? 2. How does the reviewer assess the strengths and weaknesses of the proposed approach, particularly in comparison with other baselines and prior works? 3. Do you have any concerns or questions regarding the experimental setup, ablation study, or results analysis presented in the paper? 4. How does the reviewer evaluate the significance and impact of the paper's findings on the research community, considering its potential applications and limitations?
Review
Review Originality:
- L3: "to the best of our knowledge, there is no method yet to achieve domain adaptation on 3D data, especially point cloud data" see below
[SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud, Wu et al 2018] proposes a domain adaptation pipeline for 3D lidar point clouds to reduce the distribution gap between synthetic and real data.
[Domain Adaptation for Vehicle Detection from Bird's Eye View LiDAR Point Cloud Data, Saleh et al 2019] This is quite recent, but also explores domain adaptation for synthetic vs. real data.
Technically the above two operate in image space (depth semantic segmentation maps, and BEV of point clouds, respectively), but the underlying goal is still to model 3D information from point clouds. The first paper is from 2018, so I do think this paper overclaims the 'first to do domain adaptation on 3D data' statement a bit. It's worth noting, though, that this paper explores a point-based representation rather than an image-based one, and targets classification rather than point segmentation. But I think the similarities and differences should be mentioned and discussed.
+ The idea of locally aligning features using self-adaptive nodes with an adaptive part-based receptive field is pretty novel and interesting. This provides additional structure to the feature that would make the global alignment easier, since it is invariant to scale and part configuration. But while this would work for classification, I'm not sure it would work for other shape-sensitive tasks such as 3D detection?
- In fact, Table 3 shows that adding local alignment using self-adaptive nodes doesn't always lead to an improvement over the other baselines on all classes.
- Global alignment is a feature alignment based on MCD from [23], so this part is not new.
+/- I think it's really useful to have a benchmark dataset for domain adaptation, and I appreciate the authors taking the initiative and assembling such a dataset. But since this is simply a subset of existing data, I don't think this is a strong contribution (which, to be fair, the authors never claim it is).
Quality:
+ Extensive experiments and ablation study with detailed comparison against other UDA baselines. This is really useful, especially since the proposed benchmark is new.
+ The ablation study shows that each component really does add to the performance (Table 2).
? Is the proposed approach with only global alignment equivalent to the MCD baseline from [23]? I assume so, since there is no ablation study with only G and G is based on MCD, but are they all using the same settings, parameters, etc.?
+ The performance breakdown per class in Table 3 is a nice touch. This is very useful since it shows the strengths and weaknesses of each approach.
- All the scores in Table 3 (Avg) are lower than their Table 2 counterparts, which makes me wonder if the imbalanced nature of the data across categories has more effect than it should. Chair and table sort of dominate the dataset and skew the final score toward the trend of these two classes. I feel a fairer comparison would be one where all classes have an equal number of objects, or where each class is weighted equally. I know this is pretty common in classification tasks, but it can be misleading.
? The result for bed is very interesting and worth a discussion. MCD [23] outperforms the other methods by a large margin.
And if we assume that the proposed approach with only G is the same as MCD, then adding local alignment drops the classification score from 26.1 to 4.3 (and adding attention further drops it to 1). Do you have any intuition on why this is the case?
Clarity:
+ Overall the paper is not difficult to understand.
+ The format of the experiments, the ablation study, and the tables showing the results are all very clear and easy to digest.
? I feel like section 3.5 doesn't add much to the narrative and could be put in the supplementary material instead.
- It's not immediately clear to me in lines 90-91 that P(s), P(t) refer to the distributions (they are defined in the next section).
Significance:
+ I believe the idea of self-adaptive nodes for 3D objects would be useful to the research community, if it works. Aligning features might not be new, but doing so in a 3D setting and on top of PointNet-based features shows that it is possible and promising, at least for the chair and table categories.
+ It's true that not many works are looking into domain adaptation for 3D data, and it helps to have a common benchmark even if it is just a combination of existing datasets.
--UPDATED AFTER REBUTTAL--
Thanks for the detailed rebuttal. The additional results are quite interesting and further convince me that the proposed local alignment does help. So I'm keeping my score at 6.
NIPS
Title Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence

Abstract Existing rotated object detectors are mostly inherited from the horizontal detection paradigm, as the latter has evolved into a well-developed area. However, these detectors struggle to perform well in high-precision detection due to the limitations of current regression loss designs, especially for objects with large aspect ratios. Taking the perspective that horizontal detection is a special case of rotated object detection, in this paper we are motivated to change the design of the rotation regression loss from an induction paradigm to a deduction methodology, in terms of the relation between rotation and horizontal detection. We show that one essential challenge is how to modulate the coupled parameters in the rotation regression loss, such that the estimated parameters can influence each other during dynamic joint optimization, in an adaptive and synergetic way. Specifically, we first convert the rotated bounding box into a 2-D Gaussian distribution, and then calculate the Kullback-Leibler Divergence (KLD) between the Gaussian distributions as the regression loss. By analyzing the gradient of each parameter, we show that KLD (and its derivatives) can dynamically adjust the parameter gradients according to the characteristics of the object. For instance, it adjusts the importance (gradient weight) of the angle parameter according to the aspect ratio. This mechanism can be vital for high-precision detection, as a slight angle error causes a serious accuracy drop for large-aspect-ratio objects. More importantly, we prove that KLD is scale invariant. We further show that the KLD loss degenerates into the popular $l_n$-norm loss for horizontal detection. Experimental results on seven datasets using different detectors show its consistent superiority, and the code is available at https://github.com/yangxue0827/RotationDetection.

1 Introduction
As a fundamental building block for visual analysis across aerial images, scene text, etc., rotated object detection has recently developed rapidly [1, 2, 3, 4, 5, 6], benefiting from well-established horizontal detection approaches [7, 8, 9, 10, 11]. Specifically, many works [12, 13, 14, 15] build upon the previously established horizontal detection pipeline from an inductive perspective, as shown in Figure 1(a). However, these detectors are often unable to cope well with challenging scenes due to the limitations of the current regression loss, such as large-aspect-ratio objects, dense scenes, etc., resulting in obvious disadvantages in high-precision detection.

∗Part of the work was done during an internship at Huawei Inc. †Corresponding author is Junchi Yan. 35th Conference on Neural Information Processing Systems (NeurIPS 2021).

(a) Previous methods follow the induction paradigm from special horizontal to general rotated detection. (b) Our proposed method adopts a deduction methodology from general rotated to special horizontal detection. Figure 1: Methodological road-map difference between horizontal detection (special case) and rotation detection (general case) in the previous methods [1, 12, 13, 14, 15] and the proposed method.

In this paper, we take a step back and aim to develop, from a deductive perspective, a unified regression framework for rotation detection and its special case: horizontal detection.
In fact, our new framework enjoys the coherent property that it degenerates into the commonly used regression loss (e.g., $l_n$-norm) in the special case of horizontal detection, as shown in Figure 1(b). For devising a rotation regression loss for high-precision rotation detection, one important observation is that the importance of different parameters varies across object types. For example, the angle parameter (θ) and the center-point parameters (x, y) are important for large-aspect-ratio objects and small objects, respectively. In other words, we conjecture that the regression loss should be self-modulated during the learning process, calling for a more dynamic optimization strategy. Inspired by the above ideas, we first convert the rotated bounding box B(x, y, h, w, θ) into a 2-D Gaussian distribution N(µ, Σ). As a standard distance metric, we then use the Kullback-Leibler Divergence (KLD) [16] to calculate the distributional distance between the predicted bounding box and the ground truth as the regression loss. We compare KLD with the smooth L1 loss [7] and another distance metric, the Gaussian Wasserstein Distance (GWD) [5, 17], and find that KLD has a more complete parameter optimization mechanism. In particular, by analyzing the gradients of the parameters during learning, we show that the optimization of one parameter is affected by the other parameters (as gradient weights). This means the model adaptively adjusts its optimization strategy for a specific object configuration, which we show leads to excellent performance in high-precision detection. In addition, KLD is proven scale invariant, an important property that the smooth L1 loss and GWD do not possess. As the horizontal bounding box is a special case of the rotated bounding box, we show that KLD can also degenerate into the $l_n$-norm loss commonly used in existing horizontal detection pipelines. The highlights of this paper are four-fold: 1) Differing from the dominant existing practice of building rotation detectors heavily upon horizontal detectors, we develop a new rotation detection loss from scratch and show that it is coherent with the existing horizontal detection protocol in its degenerated case of horizontal detection. 2) To achieve a more principled measurement between the prediction and the ground truth, instead of computing the difference for each physically meaningful parameter of the bounding box, which are on different scales and in different units, we innovatively convert the regression loss of rotation detection into the KLD of two 2-D Gaussian distributions, leading to a clean and coherent regression loss. 3) Through the gradient analysis of each parameter in KLD, we further find that the self-modulated optimization mechanism of KLD greatly promotes high-precision detection, verifying the advantage of our loss design. More importantly, we theoretically show (in the appendix) that KLD is scale invariant for detection, which is crucial for the rotation case. 4) Extensive experimental results on seven public datasets and two popular detectors show the effectiveness of our approach, which achieves new state-of-the-art performance for rotation detection. The source code [18] is publicly available.

2 Background
We first discuss the related works on both horizontal and rotated object detection.
Then we summarize the current design paradigms of rotation regression losses under two methodologies, as shown in Figure 1: one is inductive, trying to develop general rotation detection from the special, classic horizontal detection pipeline; the other is deductive, aiming to devise a general rotation detection pipeline with horizontal detection as its special case.

2.1 Related Works
Horizontal object detection. Horizontal object detection, which covers most of the existing detection literature, normally uses a horizontal bounding box to represent the object. Mainstream classical object detection algorithms can be roughly divided according to the following criteria: two-stage [7, 8, 9, 11] vs. single-stage [10, 19, 20] object detection, anchor-free [21, 22, 23] vs. anchor-based [8, 9, 10] object detection, and CNN-based [8, 10, 21] vs. Transformer-based [24, 25] object detection. Although the pipelines vary, the mainstream regression losses are the popular $l_n$-norm losses (such as the smooth L1 loss) or IoU-based losses (such as GIoU [26] and DIoU [27]). These detectors have also been widely used in other scenarios and achieve satisfactory performance. However, horizontal detectors do not provide accurate orientation and scale information.

Rotated object detection. Recent advances in rotation detection [3, 4, 12, 14, 28] are mainly driven by adapting horizontal object detectors with rotated bounding boxes to represent multi-oriented objects. To accurately predict the rotated bounding box, most rotation detection methods extend the $l_n$-norm losses [12, 15, 29, 30, 31] used in horizontal detection, or construct a differentiable approximate IoU loss [3, 5, 32]. Starting from scratch, we instead change the design of the rotation regression loss from an induction paradigm to a deduction methodology, which is in fact a generalization of the horizontal case. In the following, we describe the existing works from the induction and deduction perspectives.

2.2 Inductive Thinking of Loss Design: from Special Horizontal to General Rotation Detection
Regression loss is a vital part of most current object detection algorithms. For horizontal bounding box regression, the model [7, 8, 9, 10, 11] mainly outputs four items for location and size:

$$t_x^p = \frac{x_p - x_a}{w_a},\quad t_y^p = \frac{y_p - y_a}{h_a},\quad t_w^p = \ln\Big(\frac{w_p}{w_a}\Big),\quad t_h^p = \ln\Big(\frac{h_p}{h_a}\Big) \qquad (1)$$

to match the four targets from the ground truth:

$$t_x^t = \frac{x_t - x_a}{w_a},\quad t_y^t = \frac{y_t - y_a}{h_a},\quad t_w^t = \ln\Big(\frac{w_t}{w_a}\Big),\quad t_h^t = \ln\Big(\frac{h_t}{h_a}\Big) \qquad (2)$$

where $x, y, h, w$ denote the center coordinates, height and width, respectively. The variables $x_t, x_a, x_p$ refer to the ground-truth box, anchor box, and predicted box, respectively (likewise for $y, w, h$). Extending the above horizontal case, existing rotation detection models [1, 12, 13, 14, 15] also use a regression loss that simply involves an extra angle parameter $\theta$:

$$t_\theta^p = f(\theta_p - \theta_a),\quad t_\theta^t = f(\theta_t - \theta_a) \qquad (3)$$

where $f(\cdot)$ is used to deal with angular periodicity, e.g., via trigonometric functions or the modulo operation. The overall regression loss for rotation detection is:

$$\mathcal{L}_{reg} = l_n\text{-norm}(\Delta t_x, \Delta t_y, \Delta t_w, \Delta t_h, \Delta t_\theta) \qquad (4)$$

where $\Delta t_x = t_x^p - t_x^t = \frac{\Delta x}{w_a}$, $\Delta t_y = t_y^p - t_y^t = \frac{\Delta y}{h_a}$, $\Delta t_w = t_w^p - t_w^t = \ln(w_p/w_t)$, $\Delta t_h = t_h^p - t_h^t = \ln(h_p/h_t)$, and $\Delta t_\theta = t_\theta^p - t_\theta^t = \Delta\theta$. It can be seen that the parameters are optimized independently, making the loss (or detection accuracy) sensitive to under-fitting of any single parameter. This mechanism is fatal to high-precision detection.
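For concreteness, here is a tiny Python sketch of the offset encoding in Eqs. (1)-(3); the periodicity function f is taken as a plain difference, which is only one of the options mentioned above.

import math

def encode_offsets(gt, anchor):
    """Encode a ground-truth box against an anchor as in Eqs. (1)-(3).

    Boxes are (x, y, w, h, theta) tuples; theta is in radians.
    """
    xt, yt, wt, ht, tt = gt
    xa, ya, wa, ha, ta = anchor
    return ((xt - xa) / wa, (yt - ya) / ha,
            math.log(wt / wa), math.log(ht / ha), tt - ta)

# Example: a slightly shifted, slightly rotated target.
print(encode_offsets((10.0, 5.0, 8.0, 4.0, 0.1), (9.0, 5.0, 8.0, 4.0, 0.0)))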
Taking the left side of Figure 2 as an example, detection results based on the smooth L1 loss often show deviations in the center point or angle. Moreover, different types of objects have different sensitivities to these five parameters. For example, the angle parameter is very important for detecting objects with large aspect ratios. This would require selecting an appropriate set of weights for each specific object sample during training, which is nontrivial or even unrealistic.

2.3 Deductive Thinking of Loss Design: from General Rotation to Special Horizontal Detection
To break with the original inductive design paradigm, we adopt a deductive paradigm to construct a more accurate rotation regression loss. Here we rephrase the main idea of the recent work [5], which converts an arbitrary-oriented bounding box $\mathcal{B}(x, y, h, w, \theta)$ into a 2-D Gaussian $\mathcal{N}(\mu, \Sigma)$, as illustrated in Figure 3, and then calculates the distance between two Gaussians as the final loss. Specifically, the conversion is:

$$\mu = (x, y)^\top,$$
$$\Sigma^{1/2} = R\Lambda R^\top = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}\begin{pmatrix}\frac{w}{2} & 0\\ 0 & \frac{h}{2}\end{pmatrix}\begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix} = \begin{pmatrix}\frac{w}{2}\cos^2\theta + \frac{h}{2}\sin^2\theta & \frac{w-h}{2}\cos\theta\sin\theta\\ \frac{w-h}{2}\cos\theta\sin\theta & \frac{w}{2}\sin^2\theta + \frac{h}{2}\cos^2\theta\end{pmatrix} \qquad (5)$$

where $R$ represents the rotation matrix and $\Lambda$ represents the diagonal matrix of eigenvalues. The recent work [5] shows that the introduction of $\mathcal{N}(\mu, \Sigma)$ resolves the inconsistency between metric and loss, the boundary discontinuity, and the square-like problem. On this basis, we further study how to design a high-precision detection regression loss in this new parameter space. Our view is that a self-modulated mechanism is positively correlated with final high-precision performance.

Gaussian Wasserstein Distance. The Wasserstein distance [5, 17] between two probability measures $\mathcal{X}_p \sim \mathcal{N}_p(\mu_p, \Sigma_p)$ and $\mathcal{X}_t \sim \mathcal{N}_t(\mu_t, \Sigma_t)$ is expressed as:

$$\mathbf{D}_w(\mathcal{N}_p, \mathcal{N}_t)^2 = \underbrace{\|\mu_p - \mu_t\|_2^2}_{\text{center distance}} + \underbrace{\mathrm{Tr}\big(\Sigma_p + \Sigma_t - 2(\Sigma_p^{1/2}\Sigma_t\Sigma_p^{1/2})^{1/2}\big)}_{\text{coupling terms about } h_p,\ w_p \text{ and } \theta_p} \qquad (6)$$

Eq. (6) shows that the Gaussian Wasserstein Distance (GWD) splits into two parts: the distance between the center points $(x, y)$ and the coupling terms about $h$, $w$ and $\theta$. Accordingly, the regression loss based on GWD can be regarded as a semi-coupled loss. Although GWD can greatly improve high-precision rotation detection performance thanks to the coupling between part of the parameters, the independent optimization of the center point makes the detection results slightly shifted (see Figure 2). Note that GWD is not scale invariant, which is not detection friendly. When all the boxes are horizontal ($\theta = 0°$), Eq. (6) can be further simplified:

$$\mathbf{D}_w^h(\mathcal{N}_p, \mathcal{N}_t)^2 = \|\mu_p - \mu_t\|_2^2 + \|\Sigma_p^{1/2} - \Sigma_t^{1/2}\|_F^2 = (x_p - x_t)^2 + (y_p - y_t)^2 + \big((w_p - w_t)^2 + (h_p - h_t)^2\big)/4 = l_2\text{-norm}(\Delta x, \Delta y, \Delta w/2, \Delta h/2) \qquad (7)$$

where $\|\cdot\|_F$ is the Frobenius norm. Although Eq. (7) can still be used as a regression loss for horizontal detection, Eq. (4) and Eq. (7) are not completely consistent. While the GWD scheme is a preliminary exploration of the deductive paradigm, it does not focus on achieving high-precision detection and scale invariance. In the following, we propose our new approach based on the Kullback-Leibler divergence (KLD) [16].

3 Proposed Approach
Kullback-Leibler Divergence. To explore a more appropriate regression loss, we adopt the Kullback-Leibler divergence (KLD) [16].
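Before turning to the KLD itself, the box-to-Gaussian conversion of Eq. (5) can be sketched in a few lines of numpy; this is an illustrative helper, with theta in radians.

import numpy as np

def box_to_gaussian(x, y, w, h, theta):
    """Convert a rotated box (x, y, w, h, theta) to (mu, Sigma) per Eq. (5)."""
    mu = np.array([x, y])
    r = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    s_half = r @ np.diag([w / 2.0, h / 2.0]) @ r.T   # Sigma^{1/2} = R Lambda R^T
    return mu, s_half @ s_half                       # Sigma = (Sigma^{1/2})^2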
Similarly, the KLD between two 2-D Gaussians is:

$$\mathbf{D}_{kl}(\mathcal{N}_p\|\mathcal{N}_t) = \underbrace{\frac{1}{2}(\mu_p - \mu_t)^\top\Sigma_t^{-1}(\mu_p - \mu_t)}_{\text{term about } x_p \text{ and } y_p} + \underbrace{\frac{1}{2}\mathrm{Tr}(\Sigma_t^{-1}\Sigma_p) + \frac{1}{2}\ln\frac{|\Sigma_t|}{|\Sigma_p|}}_{\text{coupling terms about } h_p,\ w_p \text{ and } \theta_p} - 1 \qquad (8)$$

or

$$\mathbf{D}_{kl}(\mathcal{N}_t\|\mathcal{N}_p) = \underbrace{\frac{1}{2}(\mu_p - \mu_t)^\top\Sigma_p^{-1}(\mu_p - \mu_t) + \frac{1}{2}\mathrm{Tr}(\Sigma_p^{-1}\Sigma_t) + \frac{1}{2}\ln\frac{|\Sigma_p|}{|\Sigma_t|}}_{\text{chain coupling of all parameters}} - 1 \qquad (9)$$

It can be seen that each term in $\mathbf{D}_{kl}(\mathcal{N}_t\|\mathcal{N}_p)$ involves partial parameter coupling, which puts all parameters into a chain coupling relationship. During the optimization of a KLD-based detector, the parameters influence each other and are jointly optimized, making the optimization mechanism of the model self-modulated. In contrast, $\mathbf{D}_{kl}(\mathcal{N}_p\|\mathcal{N}_t)$ and GWD are both semi-coupled, but $\mathbf{D}_{kl}(\mathcal{N}_p\|\mathcal{N}_t)$ has a better center-point optimization mechanism. Although KLD is asymmetric, we find by analyzing the gradients of the various parameters and the experimental results that the optimization principles of the two forms are similar. Taking the relatively simple $\mathbf{D}_{kl}(\mathcal{N}_p\|\mathcal{N}_t)$ as an example, according to Eq. (5), each term of Eq. (8) can be expressed as

$$(\mu_p - \mu_t)^\top\Sigma_t^{-1}(\mu_p - \mu_t) = \frac{4(\Delta x\cos\theta_t + \Delta y\sin\theta_t)^2}{w_t^2} + \frac{4(\Delta y\cos\theta_t - \Delta x\sin\theta_t)^2}{h_t^2} \qquad (10)$$

$$\mathrm{Tr}(\Sigma_t^{-1}\Sigma_p) = \frac{h_p^2}{w_t^2}\sin^2\Delta\theta + \frac{w_p^2}{h_t^2}\sin^2\Delta\theta + \frac{h_p^2}{h_t^2}\cos^2\Delta\theta + \frac{w_p^2}{w_t^2}\cos^2\Delta\theta \qquad (11)$$

$$\ln\frac{|\Sigma_t|}{|\Sigma_p|} = \ln\frac{h_t^2}{h_p^2} + \ln\frac{w_t^2}{w_p^2} \qquad (12)$$

where $\Delta x = x_p - x_t$, $\Delta y = y_p - y_t$, $\Delta\theta = \theta_p - \theta_t$.

Analysis of high-precision detection. Without loss of generality, we set $\theta_t = 0°$; then

$$\frac{\partial\mathbf{D}_{kl}(\mu_p)}{\partial\mu_p} = \Big(\frac{4}{w_t^2}\Delta x,\ \frac{4}{h_t^2}\Delta y\Big)^\top \qquad (13)$$

The weights $1/w_t^2$ and $1/h_t^2$ make the model dynamically adjust the optimization of the object position according to the object scale. For example, when the object scale is small or an edge is short, the model pays more attention to the offset in the corresponding direction: for such objects, a slight deviation in that direction often causes a sharp drop in IoU. When $\theta_t \neq 0°$, the gradients of the offsets ($\Delta x$ and $\Delta y$) are dynamically adjusted according to $\theta_t$ for better optimization. In contrast, the gradients of the center point under GWD and the $l_2$-norm are $\frac{\partial\mathbf{D}_w(\mu_p)}{\partial\mu_p} = (2\Delta x, 2\Delta y)^\top$ and $\frac{\partial L_2(\mu_p)}{\partial\mu_p} = (\frac{2}{w_a^2}\Delta x, \frac{2}{h_a^2}\Delta y)^\top$. The former cannot adjust the gradient dynamically according to the length and width of the object. The latter adjusts the gradient based on the length and width of the anchor $(w_a, h_a)$ instead of the target object $(w_t, h_t)$, which is almost ineffective for detectors [3, 13, 15, 28, 29, 33, 34] that use horizontal anchors for rotation detection. More importantly, neither is related to the angle of the target object. Therefore, the detection results of GWD-based and $l_n$-norm-based models show slight deviations, while the detection results of the KLD-based model are quite accurate, as shown in Figure 2. For $h_p$ and $w_p$, we have

$$\frac{\partial\mathbf{D}_{kl}(\Sigma_p)}{\partial\ln h_p} = \frac{h_p^2}{h_t^2}\cos^2\Delta\theta + \frac{h_p^2}{w_t^2}\sin^2\Delta\theta - 1,\qquad \frac{\partial\mathbf{D}_{kl}(\Sigma_p)}{\partial\ln w_p} = \frac{w_p^2}{w_t^2}\cos^2\Delta\theta + \frac{w_p^2}{h_t^2}\sin^2\Delta\theta - 1 \qquad (14)$$

On the one hand, the optimization of $h_p$ and $w_p$ is affected by $\Delta\theta$. When $\Delta\theta = 0°$, $\frac{\partial\mathbf{D}_{kl}(\Sigma_p)}{\partial\ln h_p} = \frac{h_p^2}{h_t^2} - 1$ and $\frac{\partial\mathbf{D}_{kl}(\Sigma_p)}{\partial\ln w_p} = \frac{w_p^2}{w_t^2} - 1$, which means that a smaller target height or width leads to a heavier penalty on its matching loss. This is desirable, as a smaller height or width requires higher matching precision. On the other hand, the optimization of $\Delta\theta$ is also affected by $h_p$ and $w_p$:

$$\frac{\partial\mathbf{D}_{kl}(\Sigma_p)}{\partial\theta_p} = \Big(\frac{h_p^2 - w_p^2}{w_t^2} + \frac{w_p^2 - h_p^2}{h_t^2}\Big)\sin 2\Delta\theta \qquad (15)$$

When $w_p = w_t$ and $h_p = h_t$, then $\frac{\partial\mathbf{D}_{kl}(\Sigma_p)}{\partial\theta_p} = \big(\frac{h_t^2}{w_t^2} + \frac{w_t^2}{h_t^2} - 2\big)\sin 2\Delta\theta$; by the AM-GM inequality the coefficient $\frac{h_t^2}{w_t^2} + \frac{w_t^2}{h_t^2} - 2$ is non-negative and vanishes exactly when $h_t = w_t$, and it grows with the aspect ratio.
Scale invariance. For a full-rank matrix M, |M| ≠ 0, we have D_kl(N_p||N_t) = D_kl(N_{p′}||N_{t′}), where X_{p′} = MX_p ∼ N_{p′}(Mµ_p, MΣ_pM^⊤) and X_{t′} = MX_t ∼ N_{t′}(Mµ_t, MΣ_tM^⊤). Therefore, the affine invariance (including scale invariance when M = kI, where I denotes the identity matrix) of KLD can be proven (see proof in the appendix). Compared with the ln-norm and GWD, KLD is more suitable for replacing the non-differentiable rotated IoU loss due to its consistency with the detection metric.

Horizontal special case. For horizontal detection, combining Eq. 8 to Eq. 12, we have

$$
D_{kl}^{h}(\mathcal{N}_p \| \mathcal{N}_t)
= \frac{1}{2}\left(\frac{w_p^2}{w_t^2} + \frac{h_p^2}{h_t^2} + \frac{4\Delta x^2}{w_t^2} + \frac{4\Delta y^2}{h_t^2} + \ln\frac{w_t^2}{w_p^2} + \ln\frac{h_t^2}{h_p^2} - 2\right)
= 2\,l_2\text{-norm}(\Delta t_x, \Delta t_y) + l_1\text{-norm}(\Delta t_w, \Delta t_h)
+ \frac{1}{2}\,l_2\text{-norm}\!\left(\frac{1}{\Delta t_w}, \frac{1}{\Delta t_h}\right) - 1
\qquad (16)
$$

where the first two terms of Eq. 16 are very similar to Eq. 4, and the divisor of the x and y terms is the main difference (Δx/w_t vs. Δx/w_a).

Variants of KLD. We also introduce some variants [35, 36] of KLD to further verify that the influence of asymmetry on rotation detection can be ignored. The variants mainly include

$$
D_{kl\_\min(\max)}(\mathcal{N}_p \| \mathcal{N}_t) = \min(\max)\big(D_{kl}(\mathcal{N}_p \| \mathcal{N}_t),\, D_{kl}(\mathcal{N}_t \| \mathcal{N}_p)\big)
$$
$$
D_{js}(\mathcal{N}_p \| \mathcal{N}_t) = \frac{1}{2}\left(D_{kl}\Big(\mathcal{N}_t \,\Big\|\, \frac{\mathcal{N}_p + \mathcal{N}_t}{2}\Big) + D_{kl}\Big(\mathcal{N}_p \,\Big\|\, \frac{\mathcal{N}_p + \mathcal{N}_t}{2}\Big)\right)
$$
$$
D_{jef}(\mathcal{N}_p \| \mathcal{N}_t) = D_{kl}(\mathcal{N}_t \| \mathcal{N}_p) + D_{kl}(\mathcal{N}_p \| \mathcal{N}_t)
\qquad (17)
$$

Rotation regression loss. The whole training process of the detector is as follows: i) predict the offsets (t_x^p, t_y^p, t_w^p, t_h^p, t_θ^p); ii) decode the prediction box; iii) convert the prediction box and the target ground truth into Gaussian distributions; iv) calculate the KLD of the two Gaussian distributions. Therefore, the inference time remains unchanged. We normalize the distance function as our final regression loss L_reg:

$$
L_{reg} = 1 - \frac{1}{\tau + f(D)}, \qquad \tau \geq 1
\qquad (18)
$$

where f(·) denotes a non-linear function that transforms the distance D to make the loss smoother and more expressive. In this paper, we mainly use two non-linear functions, sqrt(D) and ln(D + 1). The hyperparameter τ modulates the entire loss. The multi-task loss is:

$$
L = \frac{\lambda_1}{N_{pos}}\sum_{n=1}^{N_{pos}} L_{reg}(b_n, gt_n)
+ \frac{\lambda_2}{N}\sum_{n=1}^{N} L_{cls}(p_n, t_n)
\qquad (19)
$$

where N_pos and N indicate the numbers of positive and all anchors, b_n denotes the n-th bounding box, gt_n is the n-th target ground truth, t_n denotes the label of the n-th object, and p_n is the probability distribution over classes for the n-th object, computed by the sigmoid function. The hyper-parameters λ_1 and λ_2 control the trade-off and are set to {2, 1} by default. The classification loss L_cls is set to focal loss [10].
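Putting the pieces together, the normalized loss of Eq. 18 with the setting that performs best in our ablations (f(D) = ln(D + 1), τ = 1; see Sec. 4.2) can be sketched as follows, reusing the helpers above. The closing assertion numerically illustrates the scale invariance proved above; the box values and the scale factor k = 3 are arbitrary choices for illustration:

```python
def kld_regression_loss(box_p, box_t, tau=1.0):
    """L_reg = 1 - 1/(tau + f(D)) with f(D) = ln(D + 1), per Eq. 18 (a sketch)."""
    mu_p, sigma_p = box_to_gaussian(*box_p)
    mu_t, sigma_t = box_to_gaussian(*box_t)
    d = kld_gaussian(mu_p, sigma_p, mu_t, sigma_t)
    return 1.0 - 1.0 / (tau + np.log(d + 1.0))

# Scale invariance: scaling x, y, w, h of both boxes by k leaves the loss unchanged.
box_p = (10.0, 10.0, 40.0, 8.0, np.deg2rad(15.0))   # (x, y, w, h, theta)
box_t = (12.0, 9.0, 42.0, 7.0, np.deg2rad(10.0))
k = 3.0
scaled = lambda b: tuple(v * k for v in b[:4]) + (b[4],)
assert np.isclose(kld_regression_loss(box_p, box_t),
                  kld_regression_loss(scaled(box_p), scaled(box_t)))
```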
4 Experiment

4.1 Datasets and Implementation Details

Our experiments are conducted over a variety of datasets, including three large-scale public datasets of aerial images, i.e. DOTA [37], UCAS-AOD [38] and HRSC2016 [39], as well as the scene text datasets ICDAR2015 [40], MLT [41] and MSRA-TD500 [42].

DOTA is one of the largest datasets for oriented object detection in aerial images, with three released versions: DOTA-v1.0, DOTA-v1.5 and DOTA-v2.0. DOTA-v1.0 contains 15 common categories, 2,806 images and 188,282 instances. The proportions of the training set, validation set, and testing set in DOTA-v1.0 are 1/2, 1/6, and 1/3, respectively. DOTA-v1.5 uses the same images as DOTA-v1.0, but extremely small instances (less than 10 pixels) are also annotated. Moreover, a new category is added, and this version contains 402,089 instances in total. DOTA-v2.0 contains 18 common categories, 11,268 images and 1,793,658 instances; compared to DOTA-v1.5, it further adds new categories. The 11,268 images in DOTA-v2.0 are split into training, validation, test-dev, and test-challenge sets. We divide the images into 600 × 600 sub-images with an overlap of 150 pixels and scale them to 800 × 800, in line with the cropping protocol in the literature [5, 28]. UCAS-AOD contains 1,510 aerial images of approximately 659 × 1,280 pixels, with two categories and 14,596 instances in total. In line with [31, 37], we randomly select 1,110 images for training and 400 for testing. HRSC2016 contains images from two scenarios, including ships at sea and ships close inshore; the training, validation and test sets include 436, 181 and 444 images, respectively. ICDAR2015, MLT and MSRA-TD500 are commonly used for oriented scene text detection and spotting. ICDAR2015 includes 1,000 training images and 500 testing images. ICDAR2017 MLT is a multi-lingual text dataset, which includes 7,200 training images, 1,800 validation images and 9,000 testing images. MSRA-TD500 consists of 300 training images and 200 testing images.

We use Tensorflow [43] to implement the proposed methods on a server with Tesla V100 GPUs and 32 GB memory. All experiments are initialized with ResNet50 [44] by default unless otherwise specified. Weight decay and momentum are set to 0.0001 and 0.9, respectively. We employ MomentumOptimizer over 8 GPUs with a total of 8 images per minibatch (1 image per GPU). All models are trained for 20 epochs in total, and the learning rate is reduced tenfold at 12 epochs and 16 epochs, respectively. The initial learning rate is set to 5e-4. The numbers of image iterations per epoch for DOTA-v1.0, DOTA-v1.5, DOTA-v2.0, UCAS-AOD, HRSC2016, ICDAR2015, MLT and MSRA-TD500 are 54k, 64k, 80k, 5k, 10k, 10k, 10k and 5k, respectively, and are doubled if data augmentation (including random rotation, flipping, and graying) or multi-scale training is used.
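For reference, the training setup above can be collected into a single configuration sketch; the dictionary and its field names are our own illustrative summary, not a file from the released code:

```python
# Training setup from Sec. 4.1 (illustrative summary; names are assumptions).
TRAIN_CONFIG = {
    "backbone": "ResNet50",          # default initialization
    "optimizer": "Momentum",
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "initial_lr": 5e-4,
    "epochs": 20,
    "lr_decay_epochs": [12, 16],     # learning rate reduced tenfold at each
    "gpus": 8,
    "images_per_gpu": 1,             # 8 images per minibatch in total
    "augmentation": ["random_rotation", "flipping", "graying"],  # doubles iterations/epoch
    "loss_weights": {"lambda1": 2, "lambda2": 1},                # Eq. 19 defaults
}
```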
4.2 Ablation Study and Further Comparison

Regression loss form and hyperparameter. Table 1 compares three forms of KLD-based regression loss on HRSC2016, including D_kl, f(D_kl) and L_reg(f(D_kl), τ). Due to its extreme sensitivity to large errors, the performance of raw D_kl is extremely poor, only 0.20%. Through a simple nonlinear transformation, the performance can be increased to 82.96% and 83.23% for sqrt and log, respectively. We further perform a detailed hyperparameter experiment on the loss L_reg proposed in this paper, and the performance reaches its optimum, about 85.25%, when τ = 1 and f(D_kl) = log(D_kl + 1). Keeping the same loss pattern, we compare six KLD-based distance functions in Table 2 and conclude that the asymmetry of KLD does not have much impact on performance. In subsequent experiments, we use L_reg(log(D_kl(N_p||N_t)), 1) as the basic setting.

Ablation study of normalization. As mentioned above, Eq. 18 is used to smooth the excessively rapid growth of the distance and to play a normalizing role. This extra normalization raises the question of whether KLD itself is actually contributing or the gains simply come from the normalization. To further prove that our method is indeed effective, we also apply the same normalization to the Smooth L1 loss to eliminate the interference caused by normalization. As shown in Table 3, there is a significant drop in performance after applying the normalization to Smooth L1. These experimental results prove that the effectiveness of KLD does not come from Eq. 18.

High-precision detection experiment. We expect the designed rotation regression loss to show advantages in high-precision detection. Table 4 compares the high-precision detection results of three different regression losses, Smooth L1, GWD and KLD, on different datasets and different detectors. For the HRSC2016 dataset, which contains a large number of ships with large aspect ratios, GWD-based RetinaNet achieves an 11.89% improvement over Smooth L1 on AP75, and KLD even obtains a 23.97% gain. Even with the stronger R3Det detector, KLD and GWD still improve AP75 by 33.96% and 22.46%, and AP50:95 by 15.22% and 9.89%, respectively. The same experimental conclusions are also reflected on the two scene text datasets MSRA-TD500 and ICDAR2015, namely KLD > GWD > Smooth L1. In general, the self-modulated optimization mechanism is of significant help for high-precision detection. For a more intuitive comparison, we visually compare these three regression losses, as shown in Figure 2. Since the center point (x, y) parameters in the Smooth L1 loss and GWD are independently optimized, their prediction results are slightly shifted. In contrast, the KLD-based prediction results are closer to the object boundary and show strong robustness in dense scenes. Similarly, the GWD- and KLD-based models have more accurate angle predictions than the Smooth L1-based model, because their angle parameter (θ) is not independently optimized.

Ablation study on more datasets. To make the results more credible, we continue the verification on five further datasets, as shown in Table 5. The improvement of KLD on the three datasets MLT, UCAS-AOD and DOTA-v1.0 is still considerable, with increases of 9.17%, 1.58%, and 5.55%, respectively. Note that for DOTA-v1.5 and DOTA-v2.0, which contain a large number of small objects (less than 10 pixels), KLD achieves significant gains of 3.63% and 3.53%.

Comparison with peer methods. Table 6 compares six peer techniques, including IoU-Smooth L1 loss [3], Modulated loss [45], RIL [34], CSL [4, 47], DCL [46], and GWD [5], on DOTA-v1.0. For fairness, these methods are all implemented on the same baseline and are trained and tested under the same environment and hyperparameters. We detail the accuracy on seven categories, covering large-aspect-ratio objects (e.g. BR, SV, LV, SH, HA) and square-like objects (e.g. ST, RD), which better reflects the real-world challenges and the advantages of our method. Without bells and whistles, the combination of RetinaNet and KLD directly surpasses R3Det (71.28% vs. 70.66% in AP50 and 69.41% vs. 68.31% in 7-AP50). Even combined with R3Det, KLD can still further improve the performance on large-aspect-ratio objects (2.82% in 7-AP50) and in high-precision detection (6.07% in AP75 and 3.65% in AP50:95). The KLD-based method is the best performer on almost all indicators.
Similar conclusions can still be drawn on the more challenging datasets (DOTA-v1.5 and DOTA-v2.0), which contain more data and tiny objects (less than 10 pixels).

Horizontal detection verification. As analyzed via Eq. 16, KLD can be degenerated into a common regression loss for the horizontal detection task. Table 7 compares the Smooth L1 and IoU/GIoU regression losses for horizontal detection with the proposed KLD loss on the MS COCO [48] dataset. The results show that our KLD is not worse than the other losses on Faster RCNN [8], RetinaNet [10] and FCOS [21], and even brings a 0.6% improvement on RetinaNet. The ground truth for rotation detection is the minimum circumscribed rectangle, which means that the ground truth well reflects the true scale and direction information of the object. The "horizontal special case" described in this paper also meets this requirement: the horizontal circumscribed rectangle is then equal to the minimum circumscribed rectangle. Although the ground truth of COCO is a horizontal box, it is not the minimum circumscribed rectangle, which means that it loses the direction information and the accurate scale information of the object. For example, for a baseball bat placed obliquely in the image, the height and width of its horizontal circumscribed rectangle do not represent the height and width of the object itself. As a result, when KLD is applied to COCO, its mechanism of dynamically adjusting the angle gradient according to the aspect ratio becomes meaningless, which limits the final performance improvement. In general, this is a defect of the dataset annotation itself, not a shortcoming of KLD. In fact, it is inappropriate to use COCO to discuss θ = 0°, because COCO discards the θ parameter. In addition, θ = 0° describes instances in the horizontal position, but this does not mean that all instances in the dataset are in a horizontal position. This paper uses COCO to discuss the "horizontal special case" to show that even if the dataset has certain labeling defects, KLD still has a certain effect. After all, it is difficult to observe the performance improvement on all horizontal objects using a rotation dataset.

4.3 Comparisons with the State-of-the-Art Methods

The evaluation is performed on DOTA, which contains a considerable number of categories and complex scenes. Our single-scale models RetinaNet-KLD-R50 and R3Det-KLD-R50 achieve 75.28% and 77.36%, respectively, outperforming the multi-scale models shown in Table 8. With a larger backbone and multi-scale testing, our method further achieves a state-of-the-art accuracy of 80.63%.

5 Discussions

Limitations. Despite the theoretical grounds and the promising experimental justification, our method has an obvious limitation: it cannot be directly applied to quadrilateral detection [34, 45].

Potential negative societal impacts. Our findings provide a simple regression loss for high-precision rotation detection. However, our research may be applied in some sensitive fields, such as remote sensing, aviation, and unmanned aerial vehicles.

Conclusion. Departing from the vast existing literature in object detection, in this paper we have designed a new regression loss for rotation detection from scratch and considered the popular horizontal detection as its special case.
Specifically, we calculate the KLD between the Gaussian distributions corresponding to the rotated bounding boxes as the regression loss, and we find that in the learning procedure guided by the KLD loss, the gradient of each parameter can be dynamically adjusted according to the characteristics of the object, which is a desirable property for robust object detection, regardless of rotation, size, aspect ratio, etc. We have also proved that KLD is scale invariant, which is crucial for detection tasks. Interestingly, we have shown that KLD degenerates into the commonly used ln-norm loss in the horizontal detection task. Extensive experimental results across different detectors and datasets show the effectiveness of our approach.

Acknowledgments and Disclosure of Funding

This work was partly supported by the National Key Research and Development Program of China (2018AAA0100704), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102) and NSFC (U20B2068, 72061127003, 61972250). Xue Yang was also partly supported by the Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University, Shanghai, China.
1. What is the focus of the paper regarding oriented object detection?
2. What is the difference between the proposed approach and previous works, particularly in terms of the regression loss function?
3. Do you have any concerns or criticisms about the proposed method, especially its novelty and performance?
4. How does the reviewer assess the significance of the work compared to prior research?
5. Are there any questions or suggestions for future work related to this topic?
Summary Of The Paper

This paper presents a new design of the regression loss function for oriented object detection. Modeling rotated bounding boxes as 2-D Gaussian distributions, the authors introduce the Kullback-Leibler Divergence for oriented bounding box regression as an alternative to the commonly used Smooth-L1 loss function. The experimental results show that the proposed loss function obtains better performance on several benchmarks.

Review

This paper extends the idea of modeling oriented bounding boxes as Gaussian distributions [1]. The main difference between this paper and [1] is using the KL Divergence instead of the Gaussian Wasserstein Distance as the regression loss function for learning oriented bounding boxes. Although better performance is obtained, the paper does not present insights into why the KL Divergence should be used. Accordingly, this paper is essentially another attempt at finding a better-performing distance function within the framework of Gaussian modeling for oriented object detection. Without further exploiting the insights behind the proposed KLD loss, the contributions claimed by this paper regarding the novelty of the proposed loss function are not that significant. The authors discuss the self-modulated mechanism of the KLD loss; however, it is only a trivial observation. As the final evaluation metric is based on the IoU computation between two bounding boxes in 2-D Euclidean space, the stricter requirement for objects with larger aspect ratios should be taken into account for the sake of better IoU; a feasible solution is increasing the weight for learning large-aspect-ratio objects. From another perspective, the proposed KLD loss couples the orientation and the width-height estimation together. As shown in equation (10), the residuals in the x and y directions are rotated by a rotation matrix R(θ) ∈ SO(2), which is actually a conversion from the polar coordinate system to the Euclidean one. The KLD loss only addresses the inaccurate angular regression of oriented object detection for larger-aspect-ratio objects; for the detection of horizontal bounding boxes, the problem of inaccurate regression of width and height still remains.

References
[1] X. Yang, J. Yan, M. Qi, W. Wang, Z. Xiaopeng, and T. Qi, "Rethinking rotated object detection with Gaussian Wasserstein distance loss," in International Conference on Machine Learning, 2021.
1. What is the focus of the paper regarding object detection?
2. What is the proposed method for improving rotated object detection, and how does it differ from existing approaches?
3. What are the strengths of the paper, particularly in terms of its contributions and clarity?
4. Are there any limitations or areas for improvement in the proposed approach, especially regarding its practicality and comparison to other works?
5. Do you have any suggestions for additional analyses or comparisons that could enhance the paper's findings?
Summary Of The Paper

This paper focuses on learning high-precision bounding boxes for rotated object detection. Instead of directly predicting the size of the box, the authors turn the box into a 2-D Gaussian distribution and use the KL divergence between the Gaussian distributions as the regression loss. They show that KLD is self-modulated, in the sense that the optimization of one parameter is affected by the other parameters; for example, the gradients for the orientation involve the width and height of the box. Experiments show that the proposed loss function significantly improves the performance of a detector on different aerial image datasets and scene text datasets.

Review

The main contribution of this paper is that it shows, both analytically and empirically, why KLD is a better option than existing losses for training a network on rotated objects. The results also generalize well across different datasets. Although this paper does not propose any new loss or new formulation of a task, it presents a strong result for future work on rotated object detection. Furthermore, the paper does a good job of explaining everything very clearly and is easy to follow. Overall, I think this is a good paper. One downside of this approach is that it does not have much advantage on horizontal object detection such as COCO, as shown in Table 6, which limits its practicality, considering that some corner-based detectors [1] already perform well at high IoU levels on COCO. Having said that, I do not consider this a major issue, as the method works very well on rotated object detection, which is also valuable. The authors show that they are able to achieve higher performance with a smaller network, so I would suggest that the authors include the inference time of their method and compare it with other works.

References
[1] Hei Law, Jia Deng: CornerNet: Detecting Objects as Paired Keypoints. Int. J. Comput. Vis. 128(3): 642-656 (2020)
NIPS
Title Learning High-Precision Bounding Box for Rotated Object Detection via Kullback-Leibler Divergence Abstract Existing rotated object detectors are mostly inherited from the horizontal detection paradigm, as the latter has evolved into a well-developed area. However, these detectors are difficult to perform prominently in high-precision detection due to the limitation of current regression loss design, especially for objects with large aspect ratios. Taking the perspective that horizontal detection is a special case for rotated object detection, in this paper, we are motivated to change the design of rotation regression loss from induction paradigm to deduction methodology, in terms of the relation between rotation and horizontal detection. We show that one essential challenge is how to modulate the coupled parameters in the rotation regression loss, as such the estimated parameters can influence to each other during the dynamic joint optimization, in an adaptive and synergetic way. Specifically, we first convert the rotated bounding box into a 2-D Gaussian distribution, and then calculate the Kullback-Leibler Divergence (KLD) between the Gaussian distributions as the regression loss. By analyzing the gradient of each parameter, we show that KLD (and its derivatives) can dynamically adjust the parameter gradients according to the characteristics of the object. For instance, it will adjust the importance (gradient weight) of the angle parameter according to the aspect ratio. This mechanism can be vital for high-precision detection as a slight angle error would cause a serious accuracy drop for large aspect ratios objects. More importantly, we have proved that KLD is scale invariant. We further show that the KLD loss can be degenerated into the popular ln-norm loss for horizontal detection. Experimental results on seven datasets using different detectors show its consistent superiority, and codes are available at https://github.com/yangxue0827/RotationDetection. 1 Introduction As a fundamental building block for visual analysis across aerial images, scene text etc., rotated object detection has recently been developed rapidly [1, 2, 3, 4, 5, 6], which benefit themselves from the well-established horizontal detection approaches [7, 8, 9, 10, 11]. Specifically, many works [12, 13, 14, 15] build themselves upon the previously established horizontal detection pipeline from an inductive perspective, as shown in Figure 1(a). However, these detectors are often unable to cope with challenging scenes well due to the limitations of current regression loss, such as large aspect ratio objects, dense scenes, etc., resulting in obvious disadvantages in high-precision detection. ∗Part of the work was done during an internship at Huawei Inc. †Correspondence author is Junchi Yan. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). (a) Previous methods follow the induction paradigm from special horizontal to general rotated detection. (b) Our proposed method adopts a deduction methodology from general rotated to special horizontal detection. Figure 1: Methodological road-map difference between horizontal detection (special case) and rotation detection (general case) in the previous methods [1, 12, 13, 14, 15] and the proposed method. In this paper, we take a step back, and aim to develop (from a deductive perspective) a unified regression framework for rotation detection and its special case: horizontal detection. 
In fact, our new framework enjoys a coherent property that it can be degenerated into the current commonly used regression loss (e.g. ln-norm) in special cases (horizontal detection), as shown in Figure 1(b). For a devising a rotation regression loss for high-precision rotation detection, one important observation is that the importance of different parameters to different types of objects can vary. For example, the angle parameter (θ) and the center point parameter (x, y) are important for large aspect ratio objects and small objects, respectively. In another word, it is conjectured that regression loss should be self-modulated during the learning process and calls for more dynamic optimization strategy. Inspired by the above ideas, we first convert the rotated bounding box B(x, y, h, w, θ) into a 2-D Gaussian distribution N (µ,Σ). As a standard distance metric, we then use the Kullback-Leibler Divergence (KLD) [16] to calculate the distribution distance between the predicted bounding box and ground truth as the regression loss. We compare KLD with Smooth L1 loss [7] and another distance metric, Gaussian Wasserstein Distance (GWD) [5, 17], and find that KLD has a more complete parameter optimization mechanism. In particular, by analyzing the gradient of the parameters during learning, we show that the optimization of one parameter will be affected by other parameters (as the gradient weight). It means that the model will adaptively adjust the optimization strategy given a specific configuration of an object for detection, as shown can lead to excellent performance in high-precision detection. In addition, KLD is proven scale invariant, which is an important property that Smooth L1 loss and GWD do not possess. As the horizontal bounding box is a special case of the rotated bounding box, we show that KLD can also be degenerated into the ln-norm loss as commonly used in existing horizontal detection pipeline. The highlights of this paper are four-folds: 1) Differing from the dominant existing practices that build rotation detectors heavily upon the horizontal detectors, we develop new rotation detection loss from scratch and show that it is coherent with existing horizontal detection protocol in its degenerated case for horizontal detection. 2) To achieve a more principled measurement between the prediction and ground truth, instead of computing the difference for each physically-meaningful parameter related to the bounding box which are in different scales and units, we innovatively convert the regression loss of rotation detection into the KLD of two 2-D Gaussian distributions, leading to a clean and coherent regression loss. 3) Through the gradient analysis of each parameter in KLD, we further find that the self-modulated optimization mechanism of KLD greatly promotes the improvement of high-precision detection, which verify the advantage of our loss design. More importantly, we have theoretically shown (in appendix) that KLD is scale invariant for detection, which is crucial for the rotation cases. 4) Extensive experimental results on seven public datasets and two popular detectors show the effectiveness of our approach, which achieves new state-of-the-art performance for rotation detection. The source codes [18] are made public available. 2 Background We first generally discuss the related works on both horizontal and rotated object detection. 
Then we summarize the current design paradigms of rotation regression loss, which fall into two kinds of methodologies, as shown in Figure 1: one is inductive, trying to develop general rotation detection from the special and classic horizontal detection pipeline; the other is deductive, aiming to devise a general rotation detection pipeline with horizontal detection as its special case. 2.1 Related Works Horizontal object detection. Horizontal object detection, which covers most of the existing detection literature, normally uses a horizontal bounding box to represent the object. The mainstream classical object detection algorithms can be roughly divided according to the following criteria: two-stage [7, 8, 9, 11] or single-stage [10, 19, 20] object detection, anchor-free [21, 22, 23] or anchor-based [8, 9, 10] object detection, and CNN-based [8, 10, 21] or Transformer-based [24, 25] object detection. Although the pipelines may vary, the mainstream regression loss is often the popular ln-norm loss (such as the Smooth L1 loss) or an IoU-based loss (such as GIoU [26] and DIoU [27]). The aforementioned detectors have also been widely used in other scenarios and have achieved satisfactory performance. However, horizontal detectors do not provide accurate orientation and scale information. Rotated object detection. Recent advances in rotation detection [3, 4, 12, 14, 28] are mainly driven by adapting horizontal object detectors with rotated bounding boxes to represent multi-oriented objects. To accurately predict the rotated bounding box, most rotation detection methods extend the ln-norm [12, 15, 29, 30, 31] used in horizontal detection, or construct a differentiable approximate IoU loss [3, 5, 32]. From scratch, we instead change the design of the rotation regression loss from an induction paradigm to a deduction methodology, which is in fact a generalization of the horizontal case. In the following, we describe the existing works from the induction and deduction perspectives. 2.2 Inductive Thinking of Loss Design: from Special Horizontal to General Rotation Detection Regression loss is a vital part of most current object detection algorithms. For horizontal bounding box regression, the model [7, 8, 9, 10, 11] mainly outputs four items for location and size:
$$t^p_x = \frac{x_p - x_a}{w_a},\quad t^p_y = \frac{y_p - y_a}{h_a},\quad t^p_w = \ln\left(\frac{w_p}{w_a}\right),\quad t^p_h = \ln\left(\frac{h_p}{h_a}\right) \tag{1}$$
to match the four targets from the ground truth
$$t^t_x = \frac{x_t - x_a}{w_a},\quad t^t_y = \frac{y_t - y_a}{h_a},\quad t^t_w = \ln\left(\frac{w_t}{w_a}\right),\quad t^t_h = \ln\left(\frac{h_t}{h_a}\right) \tag{2}$$
where x, y, h, w denote the center coordinates, height, and width, respectively. Variables $x_t, x_a, x_p$ are for the ground-truth box, anchor box, and predicted box, respectively (likewise for y, w, h). Extending the above horizontal case, existing rotation detection models [1, 12, 13, 14, 15] also use a regression loss which simply involves an extra angle parameter θ:
$$t^p_\theta = f(\theta_p - \theta_a),\quad t^t_\theta = f(\theta_t - \theta_a) \tag{3}$$
where f(·) is used to deal with angular periodicity, e.g. via trigonometric functions, the modulo operation, etc. The overall regression loss for rotation detection is:
$$\mathcal{L}_{reg} = l_n\text{-norm}(\Delta t_x, \Delta t_y, \Delta t_w, \Delta t_h, \Delta t_\theta) \tag{4}$$
where $\Delta t_x = t^p_x - t^t_x = \frac{\Delta x}{w_a}$, $\Delta t_y = t^p_y - t^t_y = \frac{\Delta y}{h_a}$, $\Delta t_w = t^p_w - t^t_w = \ln(w_p/w_t)$, $\Delta t_h = t^p_h - t^t_h = \ln(h_p/h_t)$, and $\Delta t_\theta = t^p_\theta - t^t_\theta = \Delta\theta$. It can be seen that the parameters are optimized independently, making the loss (or detection accuracy) sensitive to the under-fitting of any one of the parameters. This mechanism is fatal to high-precision detection.
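To make the inductive parameterization concrete, the following is a minimal Python sketch of the offset encoding of Eqs. (1)-(3) and the decoupled ln-norm loss of Eq. (4). The function names and the choice of sin(·) for f(·) are our own illustrative assumptions, not the exact implementation of any particular detector.

```python
import numpy as np

def encode_rotated_box(box, anchor):
    """Encode a rotated box (x, y, w, h, theta) relative to an anchor,
    following the parameterization of Eqs. (1)-(3)."""
    x, y, w, h, theta = box
    xa, ya, wa, ha, ta = anchor
    tx = (x - xa) / wa
    ty = (y - ya) / ha
    tw = np.log(w / wa)
    th = np.log(h / ha)
    # f(.) handles angular periodicity; sin(.) is one common choice
    ttheta = np.sin(theta - ta)
    return np.array([tx, ty, tw, th, ttheta])

def smooth_l1(pred_targets, gt_targets, beta=1.0 / 9.0):
    """Eq. (4): an ln-norm (here Smooth L1) applied independently
    per parameter, with no coupling between the five deltas."""
    diff = np.abs(pred_targets - gt_targets)
    losses = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return losses.sum()
```

Note how each of the five deltas enters the loss separately, which is exactly the independent-optimization issue discussed above.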
Taking the left side of Figure 2 as an example, detection results based on the Smooth L1 loss often show deviations in the center point or angle. Moreover, different types of objects have different sensitivities to these five parameters. For example, the angle parameter is very important for detecting objects with large aspect ratios. This would require selecting an appropriate set of weights for each specific object sample during training, which is nontrivial or even unrealistic. 2.3 Deductive Thinking of Loss Design: from General Rotation to Special Horizontal Detection To break the original inductive design paradigm, we adopt a deductive paradigm to construct a more accurate rotation regression loss. Here we rephrase the main idea of the recent work [5], which converts an arbitrarily-oriented bounding box B(x, y, h, w, θ) into a 2-D Gaussian N(µ,Σ), as illustrated in Figure 3. Then the distance between the two Gaussians is calculated as the final loss. Specifically, the conversion is:
$$\mu = (x, y)^\top$$
$$\Sigma^{1/2} = R\Lambda R^\top = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \frac{w}{2} & 0 \\ 0 & \frac{h}{2} \end{pmatrix} \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \frac{w}{2}\cos^2\theta + \frac{h}{2}\sin^2\theta & \frac{w-h}{2}\cos\theta\sin\theta \\ \frac{w-h}{2}\cos\theta\sin\theta & \frac{w}{2}\sin^2\theta + \frac{h}{2}\cos^2\theta \end{pmatrix} \tag{5}$$
where R represents the rotation matrix and Λ represents the diagonal matrix of eigenvalues. The recent work [5] shows that the introduction of N(µ,Σ) can solve the inconsistency between metric and loss, the boundary discontinuity, and the square-like problem. On this basis, we further study how to design a high-precision detection regression loss in this new parameter space. Our view is that a self-modulated mechanism is positively correlated with final high-precision performance. Gaussian Wasserstein Distance. The Wasserstein distance [5, 17] between two probability measures $X_p \sim N_p(\mu_p, \Sigma_p)$ and $X_t \sim N_t(\mu_t, \Sigma_t)$ is expressed as:
$$D_w(N_p, N_t)^2 = \underbrace{\|\mu_p - \mu_t\|_2^2}_{\text{center distance}} + \underbrace{\mathrm{Tr}\left(\Sigma_p + \Sigma_t - 2\left(\Sigma_p^{1/2}\Sigma_t\Sigma_p^{1/2}\right)^{1/2}\right)}_{\text{coupling terms about } h_p,\, w_p \text{ and } \theta_p} \tag{6}$$
Eq. 6 shows that the Gaussian Wasserstein Distance (GWD) consists of two parts: the distance between the center points (x, y) and the coupling terms about h, w, and θ. Accordingly, the regression loss based on GWD can be regarded as a semi-coupled loss. Although GWD can greatly improve the performance of high-precision rotation detection due to the coupling between part of the parameters, the independent optimization of the center point makes the detection results slightly shifted (see Figure 2). Note that GWD is not scale invariant, which is not detection friendly. When all boxes are horizontal (θ = 0°), Eq. 6 can be further simplified:
$$D^h_w(N_p, N_t)^2 = \|\mu_p - \mu_t\|_2^2 + \|\Sigma_p^{1/2} - \Sigma_t^{1/2}\|_F^2 = (x_p - x_t)^2 + (y_p - y_t)^2 + \frac{(w_p - w_t)^2 + (h_p - h_t)^2}{4} = l_2\text{-norm}\left(\Delta x, \Delta y, \frac{\Delta w}{2}, \frac{\Delta h}{2}\right) \tag{7}$$
where $\|\cdot\|_F$ is the Frobenius norm. Although Eq. 7 can still be used as a regression loss for horizontal detection, Eq. 4 and Eq. 7 are not completely consistent. Although the GWD scheme is a preliminary exploration of the deductive paradigm, it does not focus on achieving high-precision detection and scale invariance. In the following, we propose our new approach based on the Kullback-Leibler divergence (KLD) [16].
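The conversion in Eq. 5 is straightforward to implement. Below is a minimal Python sketch, assuming angles are given in radians; the helper name is ours, not from the released code.

```python
import numpy as np

def box_to_gaussian(x, y, w, h, theta):
    """Eq. (5): convert a rotated box B(x, y, w, h, theta) into a 2-D
    Gaussian N(mu, Sigma), where Sigma^{1/2} = R diag(w/2, h/2) R^T."""
    mu = np.array([x, y])
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    sqrt_sigma = R @ np.diag([w / 2.0, h / 2.0]) @ R.T
    sigma = sqrt_sigma @ sqrt_sigma  # Sigma = R Lambda^2 R^T
    return mu, sigma
```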
3 Proposed Approach Kullback-Leibler Divergence. To explore a more appropriate regression loss, we adopt the Kullback-Leibler divergence (KLD) [16]. The KLD between two 2-D Gaussians is:
$$D_{kl}(N_p\|N_t) = \underbrace{\frac{1}{2}(\mu_p - \mu_t)^\top\Sigma_t^{-1}(\mu_p - \mu_t)}_{\text{term about } x_p \text{ and } y_p} + \underbrace{\frac{1}{2}\mathrm{Tr}(\Sigma_t^{-1}\Sigma_p) + \frac{1}{2}\ln\frac{|\Sigma_t|}{|\Sigma_p|}}_{\text{coupling terms about } h_p,\, w_p \text{ and } \theta_p} - 1 \tag{8}$$
or
$$D_{kl}(N_t\|N_p) = \underbrace{\frac{1}{2}(\mu_p - \mu_t)^\top\Sigma_p^{-1}(\mu_p - \mu_t) + \frac{1}{2}\mathrm{Tr}(\Sigma_p^{-1}\Sigma_t) + \frac{1}{2}\ln\frac{|\Sigma_p|}{|\Sigma_t|}}_{\text{chain coupling of all parameters}} - 1 \tag{9}$$
It can be seen that each term in $D_{kl}(N_t\|N_p)$ involves partial parameter coupling, so that all parameters form a chain coupling relationship. During the optimization of a KLD-based detector, the parameters influence each other and are jointly optimized, which makes the optimization mechanism of the model self-modulated. In contrast, $D_{kl}(N_p\|N_t)$ and GWD are both semi-coupled, but $D_{kl}(N_p\|N_t)$ has a better center point optimization mechanism. Although KLD is asymmetric, we find that the optimization principles of these two forms are similar by analyzing the gradients of the various parameters and the experimental results. Taking the relatively simple $D_{kl}(N_p\|N_t)$ as an example, according to Eq. 5, each term of Eq. 8 can be expressed as
$$(\mu_p - \mu_t)^\top\Sigma_t^{-1}(\mu_p - \mu_t) = \frac{4(\Delta x\cos\theta_t + \Delta y\sin\theta_t)^2}{w_t^2} + \frac{4(\Delta y\cos\theta_t - \Delta x\sin\theta_t)^2}{h_t^2} \tag{10}$$
$$\mathrm{Tr}(\Sigma_t^{-1}\Sigma_p) = \frac{h_p^2}{w_t^2}\sin^2\Delta\theta + \frac{w_p^2}{h_t^2}\sin^2\Delta\theta + \frac{h_p^2}{h_t^2}\cos^2\Delta\theta + \frac{w_p^2}{w_t^2}\cos^2\Delta\theta \tag{11}$$
$$\ln\frac{|\Sigma_t|}{|\Sigma_p|} = \ln\frac{h_t^2}{h_p^2} + \ln\frac{w_t^2}{w_p^2} \tag{12}$$
where $\Delta x = x_p - x_t$, $\Delta y = y_p - y_t$, $\Delta\theta = \theta_p - \theta_t$. Analysis of high-precision detection. Without loss of generality, we set $\theta_t = 0°$; then
$$\frac{\partial D_{kl}(\mu_p)}{\partial \mu_p} = \left(\frac{4}{w_t^2}\Delta x,\ \frac{4}{h_t^2}\Delta y\right)^\top \tag{13}$$
The weights $1/w_t^2$ and $1/h_t^2$ make the model dynamically adjust the optimization of the object position according to the object scale. For example, when the object scale is small or an edge is too short, the model will pay more attention to optimizing the offset in the corresponding direction; for such an object, a slight deviation in that direction often causes a sharp drop in IoU. When $\theta_t \neq 0°$, the gradients of the object offsets ($\Delta x$ and $\Delta y$) are dynamically adjusted according to $\theta_t$ for better optimization. In contrast, the gradients of the center point in GWD and the l2-norm are $\frac{\partial D_w(\mu_p)}{\partial \mu_p} = (2\Delta x, 2\Delta y)^\top$ and $\frac{\partial L_2(\mu_p)}{\partial \mu_p} = \left(\frac{2}{w_a^2}\Delta x, \frac{2}{h_a^2}\Delta y\right)^\top$. The former cannot adjust the gradient dynamically according to the length and width of the object. The latter adjusts the gradient based on the length and width of the anchor ($w_a$, $h_a$) instead of the target object ($w_t$, $h_t$), which is almost ineffective for those detectors [3, 13, 15, 28, 29, 33, 34] that use horizontal anchors for rotation detection. More importantly, neither is related to the angle of the target object. Therefore, the detection results of GWD-based and ln-norm models show a slight deviation, while the detection results of the KLD-based model are quite accurate, as shown in Figure 2. For $h_p$ and $w_p$, we have
$$\frac{\partial D_{kl}(\Sigma_p)}{\partial \ln h_p} = \frac{h_p^2}{h_t^2}\cos^2\Delta\theta + \frac{h_p^2}{w_t^2}\sin^2\Delta\theta - 1,\quad \frac{\partial D_{kl}(\Sigma_p)}{\partial \ln w_p} = \frac{w_p^2}{w_t^2}\cos^2\Delta\theta + \frac{w_p^2}{h_t^2}\sin^2\Delta\theta - 1 \tag{14}$$
On the one hand, the optimization of $h_p$ and $w_p$ is affected by $\Delta\theta$. When $\Delta\theta = 0°$, $\frac{\partial D_{kl}(\Sigma_p)}{\partial \ln h_p} = \frac{h_p^2}{h_t^2} - 1$ and $\frac{\partial D_{kl}(\Sigma_p)}{\partial \ln w_p} = \frac{w_p^2}{w_t^2} - 1$, which means that a smaller target height or width leads to a heavier penalty on its matching loss. This is desirable, as a smaller height or width requires higher matching precision. On the other hand, the optimization of $\Delta\theta$ is also affected by $h_p$ and $w_p$:
$$\frac{\partial D_{kl}(\Sigma_p)}{\partial \theta_p} = \left(\frac{h_p^2 - w_p^2}{w_t^2} + \frac{w_p^2 - h_p^2}{h_t^2}\right)\sin 2\Delta\theta \tag{15}$$
When $w_p = w_t$ and $h_p = h_t$, then $\frac{\partial D_{kl}(\Sigma_p)}{\partial \theta_p} = \left(\frac{h_t^2}{w_t^2} + \frac{w_t^2}{h_t^2} - 2\right)\sin 2\Delta\theta \geq \sin 2\Delta\theta$, where the condition for equality is $h_t = w_t$. This shows that the larger the aspect ratio of the object, the more attention the model pays to the optimization of the angle. This is the main reason why the KLD-based model has a huge advantage on high-precision detection metrics, as a slight angle error would cause a serious accuracy drop for large-aspect-ratio objects.
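For reference, a minimal sketch of the divergence in Eq. (8), built on top of the conversion sketched earlier; this is a plain NumPy transcription for clarity, not the batched implementation from the released code.

```python
import numpy as np

def kld_gaussian(mu_p, sigma_p, mu_t, sigma_t):
    """Eq. (8): D_kl(N_p || N_t) between two 2-D Gaussians."""
    sigma_t_inv = np.linalg.inv(sigma_t)
    d_mu = mu_p - mu_t
    center_term = 0.5 * d_mu @ sigma_t_inv @ d_mu  # term about x and y
    coupling = 0.5 * np.trace(sigma_t_inv @ sigma_p) \
             + 0.5 * np.log(np.linalg.det(sigma_t) / np.linalg.det(sigma_p))
    return center_term + coupling - 1.0
```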
Through the above analysis, we find that when one parameter is optimized, the other parameters serve as its weights to dynamically adjust the optimization rate. In other words, the optimization of the parameters is no longer independent: optimizing one parameter also promotes the optimization of the others. This virtuous circle of optimization is the key to KLD being an excellent rotation regression loss. In addition, $D_{kl}(N_t\|N_p)$ has similar properties; refer to the appendix for details. Scale invariance. For a full-rank matrix M with $|M| \neq 0$, we have $D_{kl}(N_p\|N_t) = D_{kl}(N_{p'}\|N_{t'})$, where $X_{p'} = MX_p \sim N_{p'}(M\mu_p, M\Sigma_p M^\top)$ and $X_{t'} = MX_t \sim N_{t'}(M\mu_t, M\Sigma_t M^\top)$. Therefore, the affine invariance (including scale invariance when M = kI, where I denotes the identity matrix) of KLD can be proven (see proof in the appendix). Compared with the ln-norm and GWD, KLD is more suitable for replacing the non-differentiable rotated IoU loss, due to its consistency with the detection metric. Horizontal special case. For horizontal detection, combining Eq. 8 to Eq. 12, we have
$$D^h_{kl}(N_p\|N_t) = \frac{1}{2}\left(\frac{w_p^2}{w_t^2} + \frac{h_p^2}{h_t^2} + \frac{4\Delta x^2}{w_t^2} + \frac{4\Delta y^2}{h_t^2} + \ln\frac{w_t^2}{w_p^2} + \ln\frac{h_t^2}{h_p^2} - 2\right) = 2\,l_2\text{-norm}(\Delta t_x, \Delta t_y) + l_1\text{-norm}(\Delta t_w, \Delta t_h) + \frac{1}{2}\,l_2\text{-norm}\left(\frac{1}{\Delta t_w}, \frac{1}{\Delta t_h}\right) - 1 \tag{16}$$
where the first two terms of Eq. 16 are very similar to Eq. 4, the main difference being the divisors in the x and y terms ($\frac{\Delta x}{w_t}$ vs. $\frac{\Delta x}{w_a}$). Variants of KLD. We have also introduced some variants [35, 36] of KLD to further verify that the influence of asymmetry on rotation detection can be ignored. The variants mainly include
$$D_{kl\_min(max)}(N_p\|N_t) = \min(\max)\left(D_{kl}(N_p\|N_t),\, D_{kl}(N_t\|N_p)\right)$$
$$D_{js}(N_p\|N_t) = \frac{1}{2}\left(D_{kl}\left(N_t\,\Big\|\,\frac{N_p + N_t}{2}\right) + D_{kl}\left(N_p\,\Big\|\,\frac{N_p + N_t}{2}\right)\right)$$
$$D_{jef}(N_p\|N_t) = D_{kl}(N_t\|N_p) + D_{kl}(N_p\|N_t) \tag{17}$$
Rotation regression loss. The whole training process of the detector is as follows: i) predict the offsets $(t^p_x, t^p_y, t^p_w, t^p_h, t^p_\theta)$; ii) decode the prediction box; iii) convert the prediction box and the target ground truth into Gaussian distributions; iv) calculate the KLD of the two Gaussian distributions. Therefore, the inference time remains unchanged. We normalize the distance function as our final regression loss $\mathcal{L}_{reg}$:
$$\mathcal{L}_{reg} = 1 - \frac{1}{\tau + f(D)},\quad \tau \geq 1 \tag{18}$$
where f(·) denotes a non-linear function that transforms the distance D to make the loss smoother and more expressive. In this paper, we mainly use two non-linear functions, $\sqrt{D}$ and $\ln(D + 1)$. The hyperparameter τ modulates the entire loss. The multi-task loss is:
$$\mathcal{L} = \frac{\lambda_1}{N_{pos}}\sum_{n=1}^{N_{pos}}\mathcal{L}_{reg}(b_n, gt_n) + \frac{\lambda_2}{N}\sum_{n=1}^{N}\mathcal{L}_{cls}(p_n, t_n) \tag{19}$$
where $N_{pos}$ and N indicate the numbers of positive and all anchors, $b_n$ denotes the n-th bounding box, $gt_n$ is the n-th target ground truth, $t_n$ denotes the label of the n-th object, and $p_n$ is the probability distribution over classes for the n-th object computed by the sigmoid function. The hyper-parameters $\lambda_1, \lambda_2$ control the trade-off and are set to {2, 1} by default. The classification loss $\mathcal{L}_{cls}$ is set as the focal loss [10].
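Putting the pieces together, here is a minimal sketch of the final regression loss of Eq. (18), reusing the two helpers sketched above; `tau`, `func`, and the (x, y, w, h, theta) box format are assumptions of this sketch.

```python
import numpy as np

def kld_regression_loss(pred_box, gt_box, tau=1.0, func="log"):
    """Eq. (18): L_reg = 1 - 1/(tau + f(D)), with f = sqrt(D) or log(D + 1).
    Reuses box_to_gaussian and kld_gaussian from the sketches above."""
    mu_p, sigma_p = box_to_gaussian(*pred_box)
    mu_t, sigma_t = box_to_gaussian(*gt_box)
    d = kld_gaussian(mu_p, sigma_p, mu_t, sigma_t)
    f_d = np.sqrt(d) if func == "sqrt" else np.log(d + 1.0)
    return 1.0 - 1.0 / (tau + f_d)
```

With tau = 1 and the log transform, this corresponds to the best-performing configuration reported in Table 1.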
4 Experiment 4.1 Datasets and Implementation Details Our experiments are conducted on a variety of datasets, including three large-scale public datasets for aerial images, i.e. DOTA [37], UCAS-AOD [38], and HRSC2016 [39], as well as the scene text datasets ICDAR2015 [40], MLT [41], and MSRA-TD500 [42]. DOTA is one of the largest datasets for oriented object detection in aerial images, with three released versions: DOTA-v1.0, DOTA-v1.5, and DOTA-v2.0. DOTA-v1.0 contains 15 common categories, 2,806 images, and 188,282 instances. The proportions of the training set, validation set, and testing set in DOTA-v1.0 are 1/2, 1/6, and 1/3, respectively. In contrast, DOTA-v1.5 uses the same images as DOTA-v1.0, but extremely small instances (less than 10 pixels) are also annotated. Moreover, a new category is added, and this version contains 402,089 instances in total. DOTA-v2.0 contains 18 common categories, 11,268 images, and 1,793,658 instances; compared to DOTA-v1.5, it further includes new categories. The 11,268 images in DOTA-v2.0 are split into training, validation, test-dev, and test-challenge sets. We divide the images into 600×600 subimages with an overlap of 150 pixels and scale them to 800×800, in line with the cropping protocol in the literature [5, 28]. UCAS-AOD contains 1,510 aerial images of approximately 659×1,280 pixels, with two categories and 14,596 instances in total. In line with [31, 37], we randomly select 1,110 images for training and 400 for testing. HRSC2016 contains images from two scenarios, including ships at sea and ships close inshore. The training, validation, and test sets include 436, 181, and 444 images, respectively. ICDAR2015, MLT, and MSRA-TD500 are commonly used for oriented scene text detection and spotting. ICDAR2015 includes 1,000 training images and 500 testing images. ICDAR2017 MLT is a multi-lingual text dataset, which includes 7,200 training images, 1,800 validation images, and 9,000 testing images. The MSRA-TD500 dataset consists of 300 training images and 200 testing images. We use Tensorflow [43] to implement the proposed methods on a server with a Tesla V100 and 32G memory. All experiments are initialized with ResNet50 [44] by default unless otherwise specified. Weight decay and momentum are set to 0.0001 and 0.9, respectively. We employ MomentumOptimizer over 8 GPUs with a total of 8 images per minibatch (1 image per GPU). All datasets are trained for 20 epochs in total, and the learning rate is reduced tenfold at epochs 12 and 16. The initial learning rate is set to 5e-4. The number of image iterations per epoch for DOTA-v1.0, DOTA-v1.5, DOTA-v2.0, UCAS-AOD, HRSC2016, ICDAR2015, MLT, and MSRA-TD500 is 54k, 64k, 80k, 5k, 10k, 10k, 10k, and 5k, respectively, and doubled if data augmentation (including random rotation, flipping, and graying) or multi-scale training is used. 4.2 Ablation Study and Further Comparison Regression loss form and hyperparameter. Table 1 compares three forms of KLD-based regression loss on HRSC2016: $D_{kl}$, $f(D_{kl})$, and $\mathcal{L}_{reg}(f(D_{kl}), \tau)$. Due to its extreme sensitivity to large errors, the performance of raw $D_{kl}$ is extremely poor, only 0.20%. Through a simple nonlinear transformation, the performance can be increased to 82.96% and 83.23%, corresponding to sqrt and log, respectively. We further perform a detailed hyperparameter experiment on the proposed loss $\mathcal{L}_{reg}$, and the performance reaches its optimum of about 85.25% when $\tau = 1$ and $f(D_{kl}) = \log(D_{kl} + 1)$. Keeping the same loss pattern, we compare six KLD-based distance functions in Table 2 and conclude that the asymmetry of KLD does not have much impact on performance. In subsequent experiments, we use $\mathcal{L}_{reg}(\log(D_{kl}(N_p\|N_t)), 1)$ as the basic setting. Ablation study of normalization. As mentioned above, the purpose of Eq. 18 is to smooth the excessively rapid growth of the distance and to play a normalization role.
This extra normalization raises the question of whether KLD is actually contributing or simply producing noise in the results. To further verify that our method is indeed effective, we also apply the same normalization to the Smooth L1 loss to eliminate any interference caused by the normalization itself. As shown in Table 3, there is a significant drop in performance after applying the normalization to Smooth L1. These experimental results show that the effectiveness of KLD does not come from Eq. 18. High-precision detection experiment. We expect the designed rotation regression loss to show advantages in high-precision detection. Table 4 compares the high-precision detection results of three different regression losses, Smooth L1, GWD, and KLD, on different datasets and different detectors. For the HRSC2016 dataset, which contains a large number of ships with large aspect ratios, the GWD-based RetinaNet achieves an 11.89% improvement over Smooth L1 on AP75, while KLD obtains an even larger gain of 23.97%. Even with the stronger R3Det detector, KLD and GWD still improve AP75 by 33.96% and 22.46%, and AP50:95 by 15.22% and 9.89%, respectively. The same conclusions hold on the two scene text datasets MSRA-TD500 and ICDAR2015, namely KLD > GWD > Smooth L1. In general, the self-modulated optimization mechanism helps high-precision detection significantly. For a more intuitive comparison, we visually compare these three regression losses, as shown in Figure 2. Since the center point (x, y) parameters in the Smooth L1 loss and GWD are independently optimized, their prediction results are slightly shifted. In contrast, the KLD-based predictions are closer to the object boundary and show strong robustness in dense scenes. Similarly, GWD-based and KLD-based models have more accurate angle prediction capabilities than the Smooth L1-based model, because their angle parameters (θ) are not independently optimized. Ablation study on more datasets. To make the results more credible, we further verify our method on five other datasets, as shown in Table 5. The improvement of KLD on MLT, UCAS-AOD, and DOTA-v1.0 is still considerable, with increases of 9.17%, 1.58%, and 5.55%, respectively. Note that for DOTA-v1.5 and DOTA-v2.0, which contain a large number of small objects (less than 10 pixels), KLD achieves significant gains of 3.63% and 3.53%. Comparison with peer methods. Table 6 compares six peer techniques, including IoU-Smooth L1 Loss [3], Modulated loss [45], RIL [34], CSL [4, 47], DCL [46], and GWD [5], on DOTA-v1.0. For fairness, these methods are all implemented on the same baseline and are trained and tested under the same environment and hyperparameters. We detail the accuracy on seven categories, covering large-aspect-ratio objects (e.g. BR, SV, LV, SH, HA) and square-like objects (e.g. ST, RD), which better reflect the real-world challenges and the advantages of our method. Without bells and whistles, the combination of RetinaNet and KLD directly surpasses R3Det (71.28% vs. 70.66% in AP50 and 69.41% vs. 68.31% in 7-AP50). Even combined with R3Det, KLD can still further improve the performance on large-aspect-ratio objects (2.82% in 7-AP50) and in high-precision detection (6.07% in AP75 and 3.65% in AP50:95). The KLD-based method is the best performer on almost all indicators.
Similar conclusions can be drawn on the more challenging datasets (DOTA-v1.5 and DOTA-v2.0), which contain more data and tiny objects (less than 10 pixels). Horizontal detection verification. As analyzed via Eq. 16, KLD can be degenerated into a common regression loss for the horizontal detection task. Table 7 compares the Smooth L1 and IoU/GIoU regression losses for horizontal detection with the proposed KLD regression loss on the MS COCO [48] dataset. The results show that our KLD is not worse than the other losses on Faster RCNN [8], RetinaNet [10], and FCOS [21], and even brings an improvement of 0.6% on RetinaNet. The ground truth for rotation detection is the minimum circumscribed rectangle, which means that the ground truth can well reflect the true scale and direction information of the object. The "horizontal special case" described in this paper also meets this requirement, since the horizontal circumscribed rectangle is then equal to the minimum circumscribed rectangle. Although the ground truth of COCO is a horizontal box, it is not the minimum circumscribed rectangle, which means that it loses the direction information and the accurate scale information of the object. For example, for a baseball bat placed obliquely in an image, the height and width of its horizontal circumscribed rectangle do not represent the height and width of the object itself. As a result, when KLD is applied to COCO, its mechanism of dynamically adjusting the angle gradient according to the aspect ratio becomes meaningless, which limits the final performance improvement. In general, this is a defect of the dataset annotation itself, not an indication that KLD is not good enough. In fact, it is inappropriate to use COCO to discuss θ = 0°, because COCO discards the θ parameter. In addition, θ = 0° describes instances in the horizontal position, but does not mean that all instances of the dataset are in a horizontal position. This paper uses COCO to discuss the "horizontal special case" to show that even when a dataset has certain labeling defects, KLD can still be effective. After all, it is difficult to observe a performance improvement on all horizontal objects using a rotation dataset. 4.3 Comparisons with the State-of-the-Art Methods The evaluation is performed on DOTA, which contains a considerable number of categories and complex scenes. Our single-scale models RetinaNet-KLD-R50 and R3Det-KLD-R50 achieve 75.28% and 77.36%, respectively, outperforming the multi-scale models shown in Table 8. With a larger backbone and multi-scale testing, our method further achieves a state-of-the-art accuracy of 80.63%. 5 Discussions Limitations. Despite the theoretical grounds and the promising experimental justification, our method has an obvious limitation: it cannot be directly applied to quadrilateral detection [34, 45]. Potential negative societal impacts. Our findings provide a simple regression loss for high-precision rotation detection. However, our research may be applied in some sensitive fields, such as remote sensing, aviation, and unmanned aerial vehicles. Conclusion. Departing from the vast existing literature on object detection, in this paper we have designed a new regression loss for rotation detection from scratch and consider the popular horizontal detection as its special case.
Specifically, we calculate the KLD between the Gaussian distributions corresponding to the rotated bounding boxes as the regression loss, and we find that in the learning procedure guided by the KLD loss, the gradient of each parameter is dynamically adjusted according to the characteristics of the object, which is a desirable property for robust object detection regardless of the object's rotation, size, aspect ratio, etc. We also prove that KLD is scale invariant, which is crucial for detection tasks. Interestingly, we have shown that KLD degenerates into the commonly used ln-norm loss in the horizontal detection task. Extensive experimental results across different detectors and datasets show the effectiveness of our approach. Acknowledgments and Disclosure of Funding This work was partly supported by the National Key Research and Development Program of China (2018AAA0100704), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and NSFC (U20B2068, 72061127003, 61972250). Xue Yang was also partly supported by the Wu Wen Jun Honorary Doctoral Scholarship, AI Institute, Shanghai Jiao Tong University, Shanghai, China.
1. What is the main contribution of the paper regarding object detection? 2. What are the weaknesses of the paper, particularly in terms of explanations, motivations, references, and proofreading? 3. Do you have any concerns regarding the choice of loss function and its scale invariance? 4. How does the reviewer assess the significance and novelty of the proposed algorithm compared to other state-of-the-art detectors? 5. What are the limitations of the current implementation and experiments that need to be improved?
Summary Of The Paper Review
Summary Of The Paper The paper proposes an algorithm that can detect slender objects, like text in images or boats in aerial images. The work mainly focuses on the bounding box regression task and includes a parameter for box rotation. Bounding boxes are represented as Gaussian functions instead of axis-aligned boxes. The scale-invariant KL-divergence loss is applied to the regression task. An analysis of the loss function claims that the computation of gradients, during optimization, is adjusted with respect to the bounding box parameters. Review The paper is not reproducible, and motivations and explanations are incomplete. This paper is not yet ready for NeurIPS 2021. The paper gives a vague description of the problem, why it is important, and what the challenges are. There is insufficient motivation. A clear description of why, and of the final end-use, should be included. In this end-use, why are rotated bounding boxes better compared to another high-precision object representation, for example segmentation? A clear motivation of why it is more correct to directly predict bounding box parameters, rather than parameter offsets, is not given. Both can be equally good depending on prior assumptions. Line 229, what kind of data augmentation? It is a strong claim to say that horizontal bounding boxes are a special case of something else without any references. Equations (4) and (16) are not very similar. There are large differences, more terms and also different norms. Missing references. Claims: lines 102, 104-105, 141-144, 212. Terms: lines 275, 276, 279. Which methods do you refer to when you say they are not scale invariant? In SSD, the offsets are very near a standard normal distribution. It is unclear why a scale-invariant loss versus a simple scaling of the loss is beneficial. You want to convey that the KL-loss dynamically adjusts gradients w.r.t. width and height. However, the details do not clearly explain this statement. E.g. is function f (13) defined here or explained somewhere else? Usually the L1-norm is better when there are outliers. Here outliers are allowed to contribute more to the loss, due to the quadratic terms in the KL-divergence loss. From the ablation this is solved by (18). This extra normalization questions if the KL-divergence is actually contributing or simply produces noise in the results. This loss would have been better motivated if Table 6 had been extended with more of the state-of-the-art detectors, where axis-aligned boxes are used. References [6,7,8,9,10] do not include an extra log term in the bounding box regression. In (4) an extra log term is added. Line 63, not well motivated or explained. Line 191, is f the same as in (13), (14), (15)? Needs proofreading. E.g. line 38, (13), line 143, line 194, line 279. Update after rebuttal: several questions have been addressed in the rebuttal. However, question 8 is not sufficiently accurately addressed. Also: what does "We will discuss above in detail in the final version." mean concretely? Given the responses from the reviewers, I have raised my assessment, but the paper is still not of sufficient quality for acceptance. Update after discussion with the authors: The disrespectful tone during the discussion is unprofessional and not suitable for a scientific top-tier conference. The rating is changed accordingly.
NIPS
Title Algorithms with Prediction Portfolios Abstract The research area of algorithms with predictions has seen recent success showing how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick the best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm and the question is how to best utilize them. Ideally, we would like the algorithm's performance to depend on the quality of the best predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is the best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single-predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance. 1 Introduction An exciting recent line of research attempts to go beyond traditional worst-case analysis of algorithms by equipping algorithms with machine-learned predictions. The hope is that these predictions allow the algorithm to circumvent worst-case lower bounds when the predictions are good, and approximately match them otherwise. The precise definitions and guarantees vary with different settings, but there have been significant successes in applying this framework to many different algorithmic problems, ranging from general online problems to classical graph algorithms (see Section 1.2 for a more detailed discussion of related work, and [33] for a survey). In all of these settings it turns out to be possible to define a "prediction" where the "quality" of the algorithm (competitive ratio, running time, etc.) depends on the "error" of the prediction. Moreover, in at least some of these settings, it has been further shown that this prediction is actually learnable with a small number of samples, usually via standard ERM methods [18]. Previous work has shown the power of accurate predictions, and there are numerous examples showing improved performance in both theory and practice. However, developing accurate predictors remains an art, and a single predictor may not capture all of the subtleties of the instance space. Recently, researchers have turned to working with portfolios of predictors: instead of training a single model, train multiple models, with the hope that one of them will give good guarantees. ∗Work was done while the author was at Carnegie Mellon University. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). It is easy to see why the best predictor in a portfolio may be significantly better than a one-size-fits-all predictor. First, many modern machine learning methods come with a slew of hyperparameters that require tuning. Learning rate, mini-batch size, optimizer choice: all of these have a significant impact on the quality of the final solution. Instead of committing to a single setting, one can instead try to cover the parameter space, with the hope that some of the predictors will generalize better than others.
Second, problem instances themselves may come from complex distributions, consisting of many latent groups or clusters. A single predictor is forced to perform well on average, whereas multiple predictors can be made to "specialize" to each cluster. In order to take advantage of the increased accuracy provided by the portfolio approach, we must adapt algorithms with predictions to take advantage of multiple predictions. To capture the gains in performance, the algorithm must perform as if equipped with the best predictor, auto-tuning to use the best one available in the portfolio. However, it is easy to see that there should be a cost as the size of the portfolio grows. In the extreme, one can add every possible prediction to the portfolio, providing no additional information, yet now requiring high performance from the algorithm. Therefore, we must aim to minimize the dependence on the number of predictions in the portfolio. We remark that the high-level setup may be reminiscent of the expert- or bandit-learning literature. However, there is a critical distinction. In expert and bandit learning, we are given a sequence of problem instances, and the goal is to compete (minimize regret) with respect to the best prediction averaged over the whole sequence. On the other hand, in our setup, we aim to compete with the best predictor on a per-instance basis. Previous work on multiple predictions. Bhaskara et al. studied an online linear optimization problem where the learner seeks to minimize the regret, provided access to multiple hints [16]. Inspired by that work, Anand et al. recently studied algorithms with multiple learned predictions in [7], proving strong bounds for important online covering problems including online set cover, weighted caching, and online facility location. This was a significant extension of the work [22] which studied the rent-or-buy problem with access to two predictions. However, their techniques and results are limited to online covering problems. Moreover, they do not discuss the learning aspects at all: they simply assume that they are given k predictions, and their goal is to have competitive ratios that are based on the minimum error of any of the k predictions. (They actually compete against a stronger dynamic benchmark, but for our purposes this distinction is not important.) On the other hand, Balcan et al. [14] look at this problem through a data-driven algorithm lens and study the sample complexity and generalization error of working with k (as opposed to 1) parameter settings. The main difference from our work is that they also aim to learn a selector, which selects one of the k parameters prior to beginning to solve the problem instance. In contrast, in this work we make the selection during the course of the algorithm, and sometimes switch back and forth while honing in on the best predictor. 1.1 Our Results and Contributions In this paper we study three fundamental problems, min-cost perfect matching, online load balancing, and non-clairvoyant scheduling for total completion time, in this new setting. Each of these has seen significant success in the single-prediction model but is not covered by previous multiple-prediction frameworks. Our results are primarily theoretical; however, we have included a preliminary empirical validation of our algorithm for min-cost perfect matching in the supplementary material.
For each of these we develop algorithms whose performance depends on the error of the best prediction, and we explore the effect of the number of predictions, k. Surprisingly, in the case of matching and scheduling we show that using a limited number of predictions is essentially free, with no asymptotic impact on the algorithm's performance. For load balancing, on the other hand, we show that the cost of multiple predictions grows logarithmically with k, again implying a tangible benefit of using multiple predictions. We now describe these in more detail. Min-Cost Perfect Matching. We begin by showcasing our approach with the classical min-cost perfect matching problem in Section 3. This problem was recently studied by [17, 18], which showed that it is possible to use learned predictions to improve the running times of classical optimization problems. In particular, [18] showed that it is possible to speed up the classical Hungarian algorithm by predicting dual values, and moreover that it is possible to efficiently (PAC-)learn the best duals. We show that simple modifications of their ideas lead to similar results for multiple predictions. Interestingly, we show that as long as k ≤ O(√n), the extra "cost" (running time) of using k predictions is negligible compared to the cost of using a single prediction, so we can use up to √n predictions "for free" while still getting a running time depending on the best of these predictions. Moreover, since in this setting running time is paramount, we go beyond sample complexity to show that it is also computationally efficient to learn the best k predictions. Online Load Balancing with Restricted Assignments. We continue in Section 4 with the fundamental load balancing problem. In this problem there are m machines, and n jobs which appear in an online fashion. Each job has a size, and a subset of machines that it can be assigned to. The goal is to minimize the maximum machine load (i.e., the makespan). This problem has been studied extensively in the traditional scheduling and online algorithms literature, and recently it has also been the subject of significant study given a single prediction [26–28]. In particular, Lattanzi, Lavastida, Moseley, and Vassilvitskii [26] showed that there exist per-machine "weights" and an allocation function such that the competitive ratio of the algorithm depends logarithmically on the maximum error of the predictions. We show that one can use k predictions and incur an additional O(log k) factor in the competitive ratio, while being competitive with the error of the best prediction. Additionally, we show that learning the best k predicted weights (in a PAC sense) can be done efficiently. Non-Clairvoyant Scheduling. Finally, in Section 5 we move to the most technically complex part of this paper. We study the problem of scheduling n jobs on a single machine, where all jobs are released at time 0, but where we do not learn the length of a job until it actually completes (the non-clairvoyant model). Our objective is to minimize the sum of completion times. This problem has been studied extensively, both with and without predictions [24, 30, 35, 37]. Most recently, Lindermayr and Megow [30] suggested using an ordering as the prediction (as opposed to the more obvious prediction of job sizes), and using the difference between the cost induced by the predicted ordering and the cost induced by the instance-optimal ordering as the notion of "error".
In this case, simply following the predicted ordering yields an algorithm with error equal to the prediction error. We extend this to the multiple prediction setting, which turns out to be surprisingly challenging. The algorithm of [30] is quite simple: follow the ordering given by the prediction (and run a 2-competitive algorithm in parallel to obtain a worst-case backstop). But we obviously cannot do this when we are given multiple orderings! So we must design an algorithm which considers all k predictions to build a schedule that has error comparable to the error of the best one. Slightly more formally, we prove that we can bound the sum of completion times by (1 + ϵ)OPT plus poly(1/ϵ) times the error of the best prediction, under the mild assumption that no set of at most log log n jobs has a large contribution to OPT. To do this, we first use sampling techniques similar to those of [24] to estimate the size of the approximately ϵn’th smallest job without incurring much cost. We then use even more sampling and partial processing to determine for each prediction whether its ϵn prefix has many jobs that should appear later (a bad sequence) or has very few jobs that should not be in the prefix (a good sequence). If all sequences are bad then every prediction has large error, so we can use a round robin schedule and charge the cost to the prediction error. Otherwise, we choose one of the good orderings and follow it for its ϵn prefix (being careful to handle outliers). We then recurse on the remaining jobs. 1.2 Related Work As discussed, the most directly related papers are Anand et al. [7] and Balcan, Sandholm, and Vitercik [14]; these give the two approaches (multiple predictions and portfolio-based algorithm selection) that are most similar to our setting. The single prediction version of min-cost bipartite matching was studied in [17, 18], the single prediction version of our load balancing problem was considered by [26–28] (and a different though related load balancing problem was considered by [4]), and the single prediction version of our scheduling problem was considered by [30] with the same prediction that we use (an ordering) and earlier with different predictions by [24, 37, 39]. Online scheduling with estimates of the true processing times was considered in [11, 12]. More generally, there has been an enormous amount of recent progress on algorithms with predictions. This is particularly true for online algorithms, where the basic setup was formalized by [31] in the context of caching. For example, the problems considered include caching [25, 31, 38], secretary problems [9, 20], ski rental [5, 37, 39], and set cover [15]. There has also been recent work on going beyond traditional online algorithms, including work on running times [17, 18], algorithmic game theory [2, 21, 32], and streaming algorithms [1, 19, 23]. The learnability of predictions for online algorithms with predictions was considered by [6]. They give a novel loss function tailored to their specific online algorithm and prediction, and study the sample complexity of learning a mapping from problem features to a prediction. While they are only concerned with the sample complexity of the learning problem, we also consider the computational complexity, giving polynomial time O(1)-approximate algorithms for the learning problems associated with min-cost matching and online load balancing. The above is only a small sample of the work on algorithms with predictions. 
We refer the interested reader to a recent survey [33], as well as a recently set up website which maintains a list of papers in the area [29]. 2 Learnability When designing new methods in the algorithms-with-predictions setting, the predictions under consideration must satisfy two constraints. First, they should be useful to the algorithm, so that using the predictions allows the algorithm to achieve better running time, competitive ratio, or some other performance measure. Second, they must be learnable: it must be feasible to find good predictions given a set of problem instances. To rigorously prove learnability, we follow previous work [13, 18, 34] and focus on proving a bound on the sample complexity of finding the best predictions that generalize. Our main result shows that for a given problem, the pseudo-dimension of finding k predictions is an Õ(k) factor² larger than that of finding a single best predictor. We state the formal theorem below, but defer the proof to the supplementary material. Theorem 2.1. Let $\mathcal{F}$ be a class of functions f : X → R with pseudo-dimension d and let $\mathcal{F}_k := \{F(x) = \min_{\ell\in[k]} f^\ell(x) \mid f^1, f^2, \ldots, f^k \in \mathcal{F}\}$. Then the pseudo-dimension of $\mathcal{F}_k$ is at most Õ(dk). Note that this directly implies that the sample complexity when looking for k predictions is a factor of k larger than that of a single predictor, by the following well-known theorem. Theorem 2.2. [8, 34, 36] Let D be a distribution over a domain X and $\mathcal{F}$ be a class of functions f : X → [0, H] with pseudo-dimension $d_\mathcal{F}$. Consider S independent samples $x^1, x^2, \ldots, x^S$ from D. There is a universal constant $c_0$ such that for any ϵ > 0 and δ ∈ (0, 1), if $S \geq c_0\left(\frac{H}{\epsilon}\right)^2 (d_\mathcal{F} + \ln(1/\delta))$ then we have
$$\left|\frac{1}{S}\sum_{s=1}^{S} f(x^s) - \mathbb{E}_{x\sim D}[f(x)]\right| \leq \epsilon$$
for all $f \in \mathcal{F}$ with probability at least 1 − δ. ²Õ(·) suppresses logarithmic factors. 3 Minimum Cost Bipartite Matching with Predicted Duals In this section we study the minimum cost bipartite matching problem with multiple predictions. The case of a single prediction has been considered recently [17, 18], where dual values were used as the prediction and it was shown that the classical Hungarian algorithm can be sped up by using appropriately learned dual values. Our goal in this section is to extend these results to multiple predictions, i.e., multiple duals. In particular, in Section 3.2 we show that we can use k duals and get a running time comparable to the time we would have spent if we had used the single best of them in the algorithm of [18], with no asymptotic loss if k is at most O(√n). Then in Section 3.3 we show that k predictions can be learned with not many more samples (or running time) than learning a single prediction. 3.1 Problem Definition and Predicted Dual Variables In the minimum cost bipartite matching problem we are given a bipartite graph G = (V, E) with n = |V| vertices and m = |E| edges, with edge costs $c \in \mathbb{Z}^E$. The objective is to output a perfect matching M ⊆ E which minimizes the cost $c(M) := \sum_{e\in M} c_e$. This problem is exactly captured by the following primal and dual linear programming formulations:
$$\min \sum_{e\in E} c_e x_e \quad \text{s.t.} \sum_{e\in N(i)} x_e = 1 \ \ \forall i\in V,\quad x_e \geq 0 \ \ \forall e\in E \qquad \text{(MWPM-P)}$$
$$\max \sum_{i\in V} y_i \quad \text{s.t.} \ \ y_i + y_j \leq c_e \ \ \forall e = ij \in E \qquad \text{(MWPM-D)}$$
Dinitz et al. [18] studied initializing the Hungarian algorithm with a prediction ŷ of the optimal dual solution y∗. They propose an algorithm which operates in two steps. First, the predicted dual solution ŷ may not be feasible, so they give an O(n + m) time algorithm which recovers feasibility (which we refer to as Make-Feasible).
Second, the now-feasible dual solution is used in a primal-dual algorithm such as the Hungarian algorithm (which we refer to as Primal-Dual), and they show that the running time depends on the ℓ1 error of the predicted solution. In addition, they show that learning a good initial dual solution is computationally efficient with low sample complexity. More formally, they proved the following theorems. Theorem 3.1 (Dinitz et al. [18]). Let (G, c) be an instance of minimum cost bipartite matching and ŷ be a prediction of an optimal dual solution y∗. There exists an algorithm which returns an optimal solution and runs in time $O(m\sqrt{n} \cdot \|y^* - \hat{y}\|_1)$. Theorem 3.2 (Dinitz et al. [18]). Let D be an unknown distribution over instances (G, c) on n vertices and let y∗(G, c) be an optimal dual solution for the given instance. Given S independent samples from D, there is a polynomial time algorithm that outputs a solution ŷ such that
$$\mathbb{E}_{(G,c)\sim D}\left[\|y^*(G,c) - \hat{y}\|_1\right] \leq \min_y \mathbb{E}_{(G,c)\sim D}\left[\|y^*(G,c) - y\|_1\right] + \epsilon$$
with probability 1 − δ, where $S = \mathrm{poly}(n, \frac{1}{\epsilon}, \frac{1}{\delta})$. 3.2 Using k Predicted Dual Solutions Efficiently Given k predicted dual solutions ŷ¹, ŷ², . . . , ŷᵏ, we would like to efficiently determine which solution has the minimum error for the given problem instance. Note that the predicted solutions may still be infeasible and that we do not know the target optimal dual solution y∗. We propose the following simple algorithm, which takes as input k predicted solutions and whose running time depends only on the ℓ1 error of the best predicted solution. First, we make each predicted solution feasible, just as before. Next, we select the (now-feasible) dual solution with the highest dual objective value and proceed to run the primal-dual algorithm with only that solution. See Algorithm 1 for pseudo-code.
Algorithm 1 Minimum cost matching with k predicted dual solutions
1: procedure k-PREDICTEDPRIMAL-DUAL(G, c, ŷ¹, ŷ², . . . , ŷᵏ)
2:   for ℓ ∈ [k] do
3:     yℓ ← MakeFeasible(G, c, ŷℓ)
4:   end for
5:   ℓ′ ← argmax_{ℓ∈[k]} Σ_{i∈V} yℓᵢ
6:   M ← Primal-Dual(G, c, yℓ′)
7:   Return M
8: end procedure
We have the following result concerning Algorithm 1. To interpret this result, note that the cost of increasing the number of predictions is O(k(n + m)), which will be dominated by the m√n term we pay for running the Hungarian algorithm unless k is extremely large (certainly larger than √n) or there is a prediction with 0 error (which is highly unlikely). Hence we can reap the benefit of a large number of predictions "for free". Theorem 3.3. Let (G, c) be a minimum cost bipartite matching instance and let ŷ¹, ŷ², . . . , ŷᵏ be predicted dual solutions. Algorithm 1 returns an optimal solution and runs in time $O(k(n+m) + m\sqrt{n} \cdot \min_{\ell\in[k]} \|y^* - \hat{y}^\ell\|_1)$. We defer the proof to the supplementary material; correctness is essentially direct from [18], and the running time requires just a simple modification of the analysis of [18].
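As a concrete illustration, here is a minimal Python sketch of Algorithm 1, assuming each dual solution is a dict mapping vertices to values; `make_feasible` and `primal_dual` are placeholders standing in for the subroutines of [18] and are not implemented here.

```python
def k_predicted_primal_dual(G, c, predicted_duals, make_feasible, primal_dual):
    """Algorithm 1: repair every predicted dual, keep the one with the
    largest dual objective, and warm-start the primal-dual method with it."""
    feasible = [make_feasible(G, c, y_hat) for y_hat in predicted_duals]
    # Among feasible duals, the one with the largest objective sum(y_i)
    # is the one that yields the running-time bound of Theorem 3.3.
    best = max(feasible, key=lambda y: sum(y.values()))
    return primal_dual(G, c, best)
```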
3.3 Learning k Predicted Dual Solutions Next we extend Theorem 3.2 to the setting where we output k predictions. Let D be a distribution over problem instances (G, c) on n vertices. We show that we can find the best set of k predictions. More formally, we prove the following theorem. Theorem 3.4. Let D be an unknown distribution over instances (G, c) on n vertices and let y∗(G, c) be an optimal dual solution for the given instance. Given S independent samples from D, there is a polynomial time algorithm that outputs k solutions ŷ¹, ŷ², . . . , ŷᵏ such that
$$\mathbb{E}_{(G,c)\sim D}\left[\min_{\ell\in[k]} \|y^*(G,c) - \hat{y}^\ell\|_1\right] \leq O(1) \cdot \min_{y^1, y^2, \ldots, y^k} \mathbb{E}_{(G,c)\sim D}\left[\min_{\ell\in[k]} \|y^*(G,c) - y^\ell\|_1\right] + \epsilon$$
with probability 1 − δ, where $S = \mathrm{poly}(n, k, \frac{1}{\epsilon}, \frac{1}{\delta})$. The proof of this theorem can be found in the supplementary material, but it is straightforward. The sample complexity comes from combining Theorem 2.1 with Theorem 3.2 (or more precisely, with the pseudo-dimension bound which implies Theorem 3.2). The O(1)-approximation factor and polynomial running time follow from the observation that the ERM problem in this setting is just an instance of the k-median clustering problem. 4 Online Load Balancing with Predicted Machine Weights We now apply our framework to online load balancing with restricted assignments. In particular, we consider proportional weights, which have been considered in prior work [26–28]. Informally, we show in Section 4.2 that if β is the cost of the best of the k predictions, then even without knowing a priori which prediction is best, we achieve cost O(β log k). Then in Section 4.3 we show that it does not take many samples to actually learn the best k predictions. 4.1 Problem Definition and Proportional Weights In online load balancing with restricted assignments there is a sequence of n jobs which must be assigned to m machines in an online fashion. Upon seeing job j, the online algorithm observes its size pj > 0 and a neighborhood N(j) ⊆ [m] of feasible machines. The algorithm must then choose some feasible machine i ∈ N(j) to irrevocably assign the job to before seeing any more jobs in the sequence. We also consider fractional assignments, i.e. vectors belonging to the set $X = \{x \in \mathbb{R}^{m\times n}_+ \mid \forall j\in[n], \sum_i x_{ij} = 1, \text{ and } x_{ij} = 0 \iff i \notin N(j)\}$. Prior work studied the application of proportional weights [3, 26–28]. Intuitively, a prediction in this setting is a weighting of machines, which then implies an online assignment that is shown to be near-optimal. Slightly more formally, suppose that we are given a weight wi for each machine i. Then each job j is fractionally assigned to machine i to a fractional amount of $\frac{w_i}{\sum_{i'\in N(j)} w_{i'}}$. Notice that given weights, this also gives an online assignment. It is known that for any instance there exist weights such that the fractional solution has a near-optimal makespan, even though there are only m "degrees of freedom" in the weights compared to mn in an assignment. That is, for all machines i, $\sum_{j\in[n]} p_j \cdot \frac{w_i}{\sum_{i'\in N(j)} w_{i'}}$ is at most a (1 + ϵ) factor larger than the optimal makespan, for any constant ϵ > 0 [3, 26]. Let w∗ be a set of near-optimal weights for a given instance. Lattanzi et al. [26] showed the following theorem: Theorem 4.1. Given predicted weights ŵ, there is an online fractional algorithm which has makespan $O(\log(\eta(\hat{w}, w^*))) \cdot \mathrm{OPT}$, where $\eta(\hat{w}, w^*) := \max_{i\in[m]} \max\left(\frac{\hat{w}_i}{w^*_i}, \frac{w^*_i}{\hat{w}_i}\right)$ is the error in the prediction. Moreover, this fractional assignment can be converted online to an integral assignment while losing only an O(log log m) factor in the makespan [26, 28]. Thus, we focus on constructing fractional assignments that are competitive with the best prediction in hindsight. 4.2 Combining Fractional Solutions Online Given k different predicted weight vectors ŵ¹, ŵ², . . . , ŵᵏ, we want to give an algorithm which is competitive against the minimum error among the predicted weights, i.e. we want the competitiveness to depend upon $\eta_{min} := \min_{\ell\in[k]} \eta(\hat{w}^\ell, w^*)$. The challenge is that we do not know up front which ℓ ∈ [k] will yield the smallest error, but instead learn this in hindsight.
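Before describing the combining algorithm, the following minimal sketch shows how a single predicted weight vector induces the online fractional assignment of Section 4.1; the function names and the (size, neighborhood) job representation are our own assumptions.

```python
def proportional_assignment(weights, neighborhood):
    """Fractionally assign one job: machine i in N(j) receives a
    w_i / sum_{i' in N(j)} w_{i'} fraction of the job."""
    total = sum(weights[i] for i in neighborhood)
    return {i: weights[i] / total for i in neighborhood}

def fractional_makespan(jobs, weights, m):
    """Run the weight-induced assignment online and report the makespan."""
    loads = [0.0] * m
    for p_j, neighborhood in jobs:  # jobs arrive one at a time
        for i, frac in proportional_assignment(weights, neighborhood).items():
            loads[i] += p_j * frac
    return max(loads)
```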
For each ℓ ∈ [k], let xℓ be the fractional assignment resulting from applying the fractional online algorithm due to [26] with weights ŵℓ. This fractional assignment is revealed one job at a time. We give an algorithm which is O(log k)-competitive against any collection of k fractional assignments which are revealed online. Moreover, our result applies to the unrelated machines setting, in which each job has a collection of machine-dependent sizes $\{p_{ij}\}_{i\in[m]}$. The algorithm is based on the doubling trick and is similar to results in [10] which apply to metrical task systems. Let $\beta := \min_{\ell\in[k]} \max_i \sum_j p_{ij} x^\ell_{ij}$ be the best fractional makespan in hindsight. As in previous work, our algorithm is assumed to know β, an assumption that can be removed [26]. At a high level, our algorithm maintains a set A ⊆ [k] of solutions which are good with respect to the current value of β, averaging among these. See Algorithm 2 for a detailed description. We have the following theorem. Theorem 4.2. Let x¹, x², . . . , xᵏ be fractional assignments which are revealed online. If Algorithm 2 is run with $\beta := \min_{\ell\in[k]} \max_i \sum_j p_{ij} x^\ell_{ij}$, then it yields a solution of cost O(log k) · β and never reaches the fail state (line 7 in Algorithm 2). Let $\beta_\ell = \max_i \sum_j p_{ij} x^\ell_{ij}$ and OPT be the optimal makespan. Theorem 4.1 shows that $\beta_\ell \leq O(\log \eta_\ell)\,\mathrm{OPT}$. The following corollary is then immediate: Corollary 4.3. Let w¹, w², . . . , wᵏ be the predicted weights with errors η₁, η₂, . . . , ηₖ. Then Algorithm 2 returns a fractional assignment with makespan at most $\mathrm{OPT} \cdot O(\log k) \cdot \min_{\ell\in[k]} \log(\eta_\ell)$.
Algorithm 2 Algorithm for combining fractional solutions online for load balancing.
1: procedure COMBINE-LOADBALANCING(β)
2:   A ← [k]  ▷ Initially all solutions are good
3:   for each job j do
4:     Receive the assignments x¹, x², . . . , xᵏ
5:     A(j, β) ← {ℓ ∈ A | ∀i ∈ [m], xℓᵢⱼ > 0 =⇒ pᵢⱼ xℓᵢⱼ ≤ β}
6:     if A = ∅ or A(j, β) = ∅ then
7:       Return "Fail"
8:     end if
9:     ∀i ∈ [m], xᵢⱼ ← (1/|A(j, β)|) Σ_{ℓ∈A(j,β)} xℓᵢⱼ
10:    B ← {ℓ ∈ A | max_{i∈[m]} Σ_{j′≤j} pᵢⱼ′ xℓᵢⱼ′ > β}  ▷ Bad solutions w.r.t. β
11:    A ← A \ B
12:  end for
13: end procedure
We defer the proof of Theorem 4.2 to the supplementary material. 4.3 Learning k Predicted Weight Vectors We now turn to the question of showing how to learn k different predicted weight vectors ŵ¹, ŵ², . . . , ŵᵏ. Recall that there is an unknown distribution D over sets of n jobs from which we receive independent samples J₁, J₂, . . . , J_S. Our goal is to show that we can efficiently learn (in terms of sample complexity) k predicted weight vectors so as to minimize the expected minimum error. Let w∗(J) be the correct weight vector for instance J and let $\eta(w, w') = \max_{i\in[m]} \max\left(\frac{w_i}{w'_i}, \frac{w'_i}{w_i}\right)$ be the error between a pair of weight vectors. We have the following result. Theorem 4.4. Let D be an unknown distribution over restricted assignment instances on n jobs and let w∗(J) be a set of good weights for instance J. Given S independent samples from D, there is a polynomial time algorithm that outputs k weight vectors ŵ¹, ŵ², . . . , ŵᵏ such that
$$\mathbb{E}_{J\sim D}\left[\min_{\ell\in[k]} \log(\eta(\hat{w}^\ell, w^*(J)))\right] \leq O(1) \cdot \min_{w^1, w^2, \ldots, w^k} \mathbb{E}_{J\sim D}\left[\min_{\ell\in[k]} \log(\eta(w^\ell, w^*(J)))\right] + \epsilon$$
with probability 1 − δ, where $S = \mathrm{poly}(m, k, \frac{1}{\epsilon}, \frac{1}{\delta})$. The proof of Theorem 4.4 is deferred to the supplementary material, but we note that to get a polynomial time algorithm we carry out an interesting reduction to k-median clustering. Namely, we show that the function $d(w, w') := \log(\eta(w, w'))$ satisfies the triangle inequality and thus forms a metric space.
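The metric structure is easy to see in code: since $\log\max(w_i/w'_i,\, w'_i/w_i) = |\log w_i - \log w'_i|$, the distance d(w, w') is simply the ℓ∞ distance between the coordinate-wise logarithms of the weight vectors, which immediately gives the triangle inequality. A minimal sketch (helper name ours):

```python
import math

def weight_distance(w, w_prime):
    """d(w, w') = log eta(w, w'): the l_infinity distance between the
    log-weight vectors, used in the reduction to k-median (Theorem 4.4)."""
    return max(abs(math.log(wi) - math.log(wpi))
               for wi, wpi in zip(w, w_prime))
```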
5 Scheduling with Predicted Permutations

In this problem there are n jobs, indexed by 1, 2, . . . , n, to be scheduled on a single machine. We assume that they are all available at time 0. Job j has size p_j and must be processed for p_j time units to complete. If all job sizes are known a priori, Shortest Job First (or equivalently Shortest Remaining Time First), which processes jobs in non-decreasing order of size, is known to be optimal for minimizing total completion time. We assume that the true value of p_j is unknown and is revealed only when the job completes, i.e. the non-clairvoyant setting. In the non-clairvoyant setting, it is known that Round-Robin (which processes all alive jobs equally) is 2-competitive and that this is the best competitive ratio one can hope for [35].

We study this basic scheduling problem assuming certain predictions are available for use. Following the recent work of Lindermayr and Megow [30], we assume that we are given k orderings/sequences as predictions, {σ_ℓ}_{ℓ∈[k]}. Each σ_ℓ is a permutation of J := [n]; intuitively, it suggests an order in which the jobs should be processed. This prediction is inspired by the aforementioned Shortest Job First (SJF), as an optimal schedule can be described as an ordering of jobs, specifically increasing order of job sizes. For each σ_ℓ, its error is measured as η(J, σ_ℓ) := COST(J, σ_ℓ) − OPT(J), where COST(J, σ_ℓ) denotes the objective of the schedule that processes jobs in the order σ_ℓ and OPT(J) denotes the optimal objective value. We may drop J from the notation when it is clear from context. As observed in [30], the error can be expressed as

η(J, σ_ℓ) = Σ_{i<j∈J} I^ℓ_{i,j} · |p_i − p_j|,

where I^ℓ_{i,j} is an indicator variable for an 'inversion' that has value 1 if and only if σ_ℓ predicts the pairwise ordering of i and j incorrectly. That is, if p_i < p_j, then the optimal schedule would process i before j, and here I^ℓ_{i,j} = 1 iff i ≻_{σ_ℓ} j, i.e. σ_ℓ processes j before i. As discussed in [30], this error measure satisfies two desired properties, monotonicity and Lipschitzness, which were formalized in [24]. Our main result is the following.

Theorem 5.1. Consider a constant ϵ > 0. Suppose that for any S ⊆ J with |S| = Θ((1/ϵ^4)(log log n + log k + log(1/ϵ))), we have OPT(S) ≤ cϵ · OPT(J) for some small absolute constant c. Then there exists a randomized algorithm that yields a schedule whose expected total completion time is at most (1 + ϵ)OPT + (1 + ϵ)(1/ϵ^5) · η(J, σ_ℓ) for all ℓ ∈ [k].

As a corollary, by running our algorithm at speed 1 − ϵ and simultaneously running Round-Robin at the remaining speed ϵ, the cost increases by a factor of at most 1/(1 − ϵ) while the resulting hybrid algorithm is 2/ϵ-competitive.³

5.1 Algorithm

To make our presentation more transparent, we first round the job sizes. Formally, we choose ρ uniformly at random from [0, 1). We then round up each job j's size to the closest number of the form (1 + ϵ)^{ρ+t} for some integer t, and scale all job sizes down by a (1 + ϵ)^ρ factor. We will present our algorithm and analysis assuming that every job has a size equal to a power of (1 + ϵ).

³This hybrid algorithm is essentially the preferential time sharing of [24, 30, 37]. Formally, we run our algorithm ignoring RR's processing and also run RR ignoring our algorithm; this can be done by a simple simulation. Thus we construct two schedules concurrently, and each job completes at the time when it does in either schedule. This type of algorithm was first used in [37].
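To illustrate the inversion-based error measure above, here is a small Python sketch that evaluates η(J, σ) in hindsight, once the true sizes are known; the function name and representation are ours for illustration.

def permutation_error(p, sigma):
    """p: true job sizes (known only in hindsight); sigma: a predicted order,
    i.e. a permutation of range(len(p)). Sums |p_i - p_j| over inverted pairs."""
    pos = {job: t for t, job in enumerate(sigma)}
    err = 0
    n = len(p)
    for i in range(n):
        for j in range(i + 1, n):
            first, second = (i, j) if pos[i] < pos[j] else (j, i)
            if p[first] > p[second]:       # larger job scheduled first: an inversion
                err += abs(p[i] - p[j])
    return err

For example, with p = [3, 1, 2] and sigma = [0, 1, 2], the pairs (0, 1) and (0, 2) are inverted, so the error is |3 − 1| + |3 − 2| = 3, which matches COST − OPT for this instance.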
In the supplementary we show how to remove this assumption without increasing our algorithm's objective by more than a 1 + ϵ factor in expectation.

We first present the following algorithm, which achieves Theorem 5.1 with |S| = Θ((1/ϵ^4)(log n + log k)). The improved bound claimed in the theorem requires minor tweaks to the algorithm and analysis, which are deferred to the supplementary material.

Our algorithm runs in rounds. Let J_r be the jobs that complete in round r ≥ 1. For any subset S of rounds, J_S := ∪_{r∈S} J_r; for example, J_{≤r} := J_1 ∪ . . . ∪ J_r. Let n_r := |J_{≥r}| = n − |J_{<r}| denote the number of alive jobs at the beginning of round r.

Fix the beginning of round r. The algorithm processes jobs in the following way for this round. If n_r ≤ (1/ϵ^4)(log n + log k), we run Round-Robin to complete all the remaining jobs J_{≥r}; this is the last round, and it is denoted round L + 1. Otherwise, we perform the following Steps 1-4:

Step 1. Estimating the ϵ-percentile. Roughly speaking, the goal is to estimate the ϵ-percentile of the job sizes among the remaining jobs. For a job j ∈ J_{≥r}, define its rank among J_{≥r} as the number of jobs no smaller than j in J_{≥r}, breaking ties in an arbitrary yet fixed way. Ideally, we would like to estimate the size of the job of rank ϵn_r, but we do so only approximately. The algorithm finds q̃_r, the size of a job whose rank lies in [ϵ(1 − ϵ)n_r, ϵ(1 + ϵ)n_r]. To handle the case that there are many jobs of the same size q̃_r, we also estimate y_r, the number of jobs no bigger than q̃_r; let ỹ_r denote our estimate of y_r. We will show how to perform these estimations without spending much time, by sampling some jobs and partially processing them in a Round-Robin manner (the proof of the following lemma can be found in the supplementary material).

Lemma 5.2. W.h.p. the algorithm can construct estimates q̃_r and ỹ_r in time at most O(q̃_r (1/ϵ^2) log n) such that there is a job of size q̃_r whose rank lies in [ϵ(1 − ϵ)n_r, ϵ(1 + ϵ)n_r] and |ỹ_r − y_r| ≤ ϵ^2 n_r.

Step 2. Determining Good and Bad Sequences. Let σ^r_ℓ denote σ_ℓ with all jobs completed in previous rounds removed and the relative ordering of the remaining jobs fixed. Let σ^r_{ℓ,ϵ} denote the first ỹ_r jobs in this ordering. We say a job j is big if p_j > q̃_r, middle if p_j = q̃_r, and small otherwise. Using sampling and partial processing, we will approximately distinguish good and bad sequences. Informally, σ^r_ℓ is good if σ^r_{ℓ,ϵ} has few big jobs and bad if it has many big jobs. The proof of the following lemma can be found in the supplementary material.

Lemma 5.3. For all ℓ ∈ [k], we can label sequence σ^r_ℓ either good or bad in time at most O(q̃_r (1/ϵ^2)(log n + log k)) such that the following holds with high probability: if σ^r_ℓ is labeled good, then σ^r_{ℓ,ϵ} has at most 3ϵ^2 n_r big jobs; otherwise σ^r_{ℓ,ϵ} has at least ϵ^2 n_r big jobs.

Step 3. Job Processing. If all sequences are bad, then we process all jobs, each up to q̃_r units, in an arbitrary order. Otherwise, we process the first ỹ_r jobs of an arbitrary good sequence, in an arbitrary order, each up to q̃_r units.

Step 4. Updating Sequences. The jobs completed in this round are dropped from the sequences, while the remaining jobs' relative ordering stays fixed in each (sub-)sequence.

For simplicity, we assume that partially processed but uncompleted jobs were never processed at all; this is without loss of generality, as the assumption only increases our schedule's objective.
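As a simplified illustration of Steps 2-3, the following Python sketch assumes, unlike the paper (which must decide "big" w.h.p. by sampling and partially processing jobs, as in Lemmas 5.2 and 5.3), that the true sizes and the round's estimates are simply handed to us; all names are ours for illustration.

def label_and_pick(sequences, sizes, q_r, y_r, n_r, eps):
    """sequences: the k remaining orderings sigma^r_l, as lists of alive job ids;
    sizes[j]: true size p_j (assumed known here for illustration only);
    q_r, y_r: the round's percentile estimates; n_r: number of alive jobs.
    Returns the jobs to process this round, each for up to q_r units."""
    for sigma in sequences:
        prefix = sigma[:y_r]
        n_big = sum(1 for j in prefix if sizes[j] > q_r)
        if n_big <= eps ** 2 * n_r:   # few big jobs in the eps-prefix: a good sequence
            return prefix             # Step 3, good case: process this prefix
    return list(sequences[0])         # all sequences bad: process every alive job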
5.2 Analysis of the Algorithm's Performance

We defer the analysis of the above algorithm (the proof of Theorem 5.1) to the supplementary material, as it is quite technical and complex. At a very high level, though, we use the fact that the error of each prediction can be decomposed into pairwise inversions, and moreover that these inversions can be partitioned into the rounds of the algorithm in which they appear. We then examine each round and split into two cases. First, if all sequences are bad, then every prediction has large error, so we can simply use Round-Robin (which is 2-competitive against OPT) and charge the cost to the error of any prediction. Second, if there is a good sequence, then the number of big jobs in any good sequence is small (so we do not spend much time processing them), and we therefore complete almost all of the non-big jobs. Here we crucially use the fact that we can process the first ϵ fraction of jobs in a sequence in an arbitrary order while remaining competitive against that sequence. Finally, we show that all of the additional assumptions and costs (e.g., rounding the processing times and the cost of sampling) change our performance by at most a 1 + ϵ factor. Getting all of these details right requires much care.

5.3 Learning k Predicted Permutations

We now show that learning the best k permutations has polynomial sample complexity.

Theorem 5.4. Let D be an unknown distribution over instances on n jobs. Given S independent samples from D, there is an algorithm that outputs k permutations σ̂_1, σ̂_2, . . . , σ̂_k such that E_{J∼D}[ min_{ℓ∈[k]} η(J, σ̂_ℓ) ] ≤ min_{σ_1,...,σ_k} E_{J∼D}[ min_{ℓ∈[k]} η(J, σ_ℓ) ] + ϵ with probability 1 − δ, where S = poly(n, k, 1/ϵ, 1/δ).

Proof. The algorithm is basic ERM, and the polynomial sample complexity follows from Theorem 2.1 and Theorem 20 of Lindermayr and Megow [30].
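Since the learning algorithm is plain ERM, the following brute-force Python sketch makes Theorem 5.4 concrete. The finite candidate pool and all names are our own illustrative assumptions: the theorem asserts only sample complexity, and exhaustive search over k-tuples is of course only feasible for tiny pools.

from itertools import combinations

def total_completion_time(p, order):
    """Sum of completion times when jobs run in the given order; p[j] is job j's size."""
    t = total = 0
    for j in order:
        t += p[j]
        total += t
    return total

def eta(p, sigma):
    """Prediction error COST(J, sigma) - OPT(J); OPT is Shortest Job First."""
    sjf = sorted(range(len(p)), key=lambda j: p[j])
    return total_completion_time(p, sigma) - total_completion_time(p, sjf)

def erm_k_permutations(samples, candidates, k):
    """samples: size vectors drawn from D; candidates: a pool of permutations.
    Returns the k-tuple minimizing the empirical mean of min_l eta(J, sigma_l)."""
    return min(combinations(candidates, k),
               key=lambda subset: sum(min(eta(p, s) for s in subset) for p in samples))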
6 Conclusion

Despite the explosion of recent work on algorithms with predictions, almost all of it has assumed only a single prediction. In this paper we study algorithms with multiple machine-learned predictions, rather than just one. We study three different problems that have been well-studied in the single prediction setting but not with multiple predictions: faster algorithms for min-cost bipartite matching using learned duals, online load balancing with learned machine weights, and non-clairvoyant scheduling with order predictions. For all of these problems we design algorithms that can utilize multiple predictions, and we show sample complexity bounds for learning the best set of k predictions. Demonstrating the effectiveness of our algorithms (and the broader use of multiple predictions) empirically is an interesting direction for further work.

Surprisingly, we have shown that in some cases using multiple predictions is essentially "free." For instance, in the case of min-cost perfect matching, examining k = O(√n) predictions takes the same amount of time as one round of the Hungarian algorithm, while the number of rounds is determined by the quality of the best prediction. In contrast, for load balancing, using k predictions always incurs an O(log k) cost, so using a constant number of predictions may be best. More generally, studying this trade-off between the cost and the benefit of multiple predictions for other problems remains an interesting and challenging open problem.

Acknowledgments and Disclosure of Funding

Michael Dinitz was supported in part by NSF grant CCF-1909111. Sungjin Im was supported in part by NSF grants CCF-1617653, CCF-1844939 and CCF-2121745. Thomas Lavastida and Benjamin Moseley were supported in part by NSF grants CCF-1824303, CCF-1845146, CCF-2121744 and CMMI-1938909. Benjamin Moseley was additionally supported in part by a Google Research Award, an Infor Research Award, and a Carnegie Bosch Junior Faculty Chair.
1. What is the main contribution of the paper regarding machine-learned predictions?
2. What are the strengths of the paper, particularly in its careful writing and motivation?
3. What are the weaknesses of the paper, such as the mismatch between the main paper and the supplementary, and the lack of clarity in some parts?
4. How does the quality of the predictors affect the performance of the final algorithm?
5. Why were no experiments conducted, and what kind of experiments would be interesting to support the theory?
6. Was there a specific reason for choosing these three problems, and how can the results be generalized to other problems?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

Considering the fact that the worst-case performance of algorithms can be improved by equipping them with good machine-learned predictions, together with the increased use of "multiple" machine-learned models for solving a specific task, each specialized on a part of the problem, the paper adapts algorithms to the case where multiple machine-learned predictors are used instead of a single one. The authors design algorithms for this case and bound both the overhead incurred compared to the single-predictor case (since the best predictor must now be identified among the k predictors) and the error that the algorithm incurs compared to the case where it is given only the best predictor. They investigate this question for 3 fundamental problems: bipartite matching, load balancing, and non-clairvoyant scheduling. They start by proving that the sample complexity of learning k predictions is a factor of k larger than that of a single predictor. They then move to the three specific problems. First, they consider the min-cost bipartite matching problem with multiple predictions available. They show that the running time of the state-of-the-art algorithm for this problem with a single prediction is comparable to that of considering k predictions when k ≤ O(√n), which suggests that using k predictions comes for free in this case. They further consider the problem of online load balancing (where a weight vector is to be learned to distribute jobs among several machines with the goal of minimizing the maximum machine load) and show that there is a polynomial time algorithm that outputs k weight vectors minimizing the expected minimum error up to a bounded loss, which comes with a logarithmic cost with respect to k. Finally, they consider the problem of scheduling n jobs on a machine when the time each job takes to complete is not known prior to its completion (non-clairvoyant scheduling); the goal is to minimize the sum of the completion times. They show that learning the best k permutations is doable with polynomially many samples, with bounded error compared to the best predictor.

Strengths And Weaknesses

Strengths:
The paper considers a very well motivated problem.
The paper is very carefully written.
I believe there will be many applications and room for future work. Generalizing the results of the paper to any k predictors solving any task, and then bounding the error with respect to the error of each of the predictors, would be very interesting.

Weaknesses:
The supplementary does not match the main paper. The supplementary is an extended version of the paper and does not include only the proofs of the theorems or propositions stated in the main paper. Some of the mismatches: Theorem 2.1 in the main paper is Theorem 2.3 in the supplementary, Theorem 5.4 in the main paper is Theorem 5.7 in the supplementary, etc.
Section 2 is not self-contained. Without referring to the supplementary, it is almost impossible to understand what this section is trying to say or what the results are. Since most of the later results in other sections are based on Theorem 2.1, I think it is important that enough detail about this theorem is provided in the main paper.
The writing could be improved. Sometimes it is very hard to understand what you are trying to convey with the wording you use. I could not understand the sentence in lines 23-25. I also could not really understand what you are trying to do in the paper by only reading the abstract.
Clearer explanations would be great.
Typo in line 25.
Although all the results are proven and experiments are not strictly needed to show that the theory works, I believe experiments are missing from the paper, and one could design many interesting experimental setups to show that the theory works in practice.

Questions

How does the quality of the predictors affect the performance of the final algorithm, which takes all predictors into account? For each of the proposed algorithms, which is more favorable: having one very accurate predictor and several very inaccurate ones, or having good performance across all predictors? Do the results also depend on the performance of the worst predictor?
Why were no experiments conducted? It would be nice to see whether the running times in practice match the ones stated in the paper, or to see the quality of the found solution compared to the algorithm using only the best predictor. Also, having multiple machines with different performance and then analyzing the performance of the algorithm would be very interesting. Another interesting set of experiments would be to vary k. I believe one can think of many more interesting experimental setups that could support the theory.
Was there a reason for choosing these 3 problems? How do you think one can generalize the results to other problems?

Limitations

NA
NIPS
Title Algorithms with Prediction Portfolios
1. What is the focus and contribution of the paper regarding online problems and statistical complexity?
2. What are the strengths of the proposed approach, particularly in terms of originality and algorithmic techniques?
3. Do you have any concerns or questions regarding the paper's quality, writing, and significance?
4. How does the reviewer assess the related work and compare it to the current paper's contributions?
5. What are the limitations of the paper, if any?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The paper examines solving certain online problems using a set of k predictions and studies the statistical complexity of obtaining a good algorithm that uses such k predictions. These problems include min-cost perfect matching, online load balancing, and non-clairvoyant scheduling for total completion time. The task is to perform as well as the best prediction.

Strengths And Weaknesses

Originality: The paper's work seems to be original. Using k predictions to supplement online algorithms has been explored by multiple works in the past; in this paper, it has been extended to 3 other problems. The algorithmic techniques for Section 3 and Section 4 are fairly standard and known. Exploring the learnability of using such k predictions seems to be a new undertaking. The algorithm in Section 5 also seems to be new work.

Clarity: The non-technical part of the paper is clearly written and easy to understand. Due to paucity of time, I could not verify the technical aspects of the paper, but the theorem statements "make sense" to me. I found that the authors motivated the use of multiple predictions well.

Quality: The quality of the results and writing is good. I did not check the math for the proofs in Section 5, but the algorithm seems intuitive. The paper is quite theoretical, so having experiments/simulations is not strictly warranted, but it would have been nice to see the proposed algorithm in Section 5 in action (even against synthetic datasets).

Significance: There is a plethora of work currently in learning-augmented algorithms. I find this work to be incremental in the field of using multiple predictions for online algorithms. The significance of the work is relevant, since algorithms with predictions as a general field has gained so much traction in the community.

Questions

Related Work: The related work is quite comprehensive. One related work that the authors could contrast their work with, vis-à-vis learnability, is: Anand, Keerti, et al. "A regression approach to learning-augmented online algorithms." Advances in Neural Information Processing Systems 34 (2021): 30504-30517. This is because Anand et al. did not consider the computational complexity of the optimization procedure that obtains the sample error minimizer. In comparison, the authors of this paper give a polynomial-time approximation algorithm for obtaining predictions from the sample set that are within an O(1) factor of the best predictions obtainable from ERM.

Line 290: In this proof of Theorem 3.4, won't the algorithm need to look at all |S| choose k candidate solutions (which would be exponential in k)? The claim is that "we can find an O(1)-approximate solution which is of polynomial size in polynomial time." What exactly is the time complexity, and what is the procedure? Please clarify.

Limitations

None.
NIPS
Title Algorithms with Prediction Portfolios

Abstract The research area of algorithms with predictions has seen recent success showing how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick the best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm and the question is how to best utilize them. Ideally, we would like the algorithm's performance to depend on the quality of the best predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is the best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.

1 Introduction

An exciting recent line of research attempts to go beyond traditional worst-case analysis of algorithms by equipping algorithms with machine-learned predictions. The hope is that these predictions allow the algorithm to circumvent worst-case lower bounds when the predictions are good, and approximately match them otherwise. The precise definitions and guarantees vary with different settings, but there have been significant successes in applying this framework for many different algorithmic problems, ranging from general online problems to classical graph algorithms (see Section 1.2 for a more detailed discussion of related work, and [33] for a survey). In all of these settings it turns out to be possible to define a "prediction" where the "quality" of the algorithm (competitive ratio, running time, etc.) depends on the "error" of the prediction. Moreover, in at least some of these settings, it has been further shown that this prediction is actually learnable with a small number of samples, usually via standard ERM methods [18].

Previous work has shown the power of accurate predictions, and there are numerous examples showing improved performance in both theory and practice. However, developing accurate predictors remains an art, and a single predictor may not capture all of the subtleties of the instance space. Recently, researchers have turned to working with portfolios of predictors: instead of training a single model, train multiple models, with the hope that one of them will give good guarantees.

∗Work was done while the author was at Carnegie Mellon University. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

It is easy to see why the best predictor in a portfolio may be significantly better than a one-size-fits-all predictor. First, many of the modern machine learning methods come with a slew of hyperparameters that require tuning. Learning rate, mini-batch size, optimizer choice: all of these have significant impact on the quality of the final solution. Instead of committing to a single setting, one can instead try to cover the parameter space, with the hope that some of the predictors will generalize better than others.
Second, problem instances themselves may come from complex distributions, consisting of many latent groups or clusters. A single predictor is forced to perform well on average, whereas multiple predictors can be made to "specialize" to each cluster.

In order to take advantage of the increased accuracy provided by the portfolio approach, we must adapt algorithms with predictions to take advantage of multiple predictions. To capture the gains in performance, the algorithm must perform as if equipped with the best predictor, auto-tuning to use the best one available in the portfolio. However, it is easy to see that there should be a cost as the size of the portfolio grows. In the extreme, one can add every possible prediction to the portfolio, providing no additional information, yet now requiring high performance from the algorithm. Therefore, we must aim to minimize the dependence on the number of predictions in the portfolio.

We remark that the high-level setup may be reminiscent of the expert- or bandit-learning literature. However, there is a critical distinction. In expert and bandit learning, we are given a sequence of problem instances, and the goal is to compete (minimize regret) with respect to the best prediction averaged over the whole sequence. On the other hand, in our setup, we aim to compete with the best predictor on a per-instance basis.

Previous work on multiple predictions. Bhaskara et al. studied an online linear optimization problem where the learner seeks to minimize the regret, provided access to multiple hints [16]. Inspired by this work, Anand et al. recently studied algorithms with multiple learned predictions in [7], proving strong bounds for important online covering problems including online set cover, weighted caching, and online facility location. This was a significant extension of the work [22], which studied the rent-or-buy problem with access to two predictions. However, their techniques and results are limited to online covering problems. Moreover, they do not discuss the learning aspects at all: they simply assume that they are given k predictions, and their goal is to have competitive ratios that are based on the minimum error of any of the k predictions. (They actually compete against a stronger dynamic benchmark, but for our purposes this distinction is not important.) On the other hand, Balcan et al. [14] look at this problem through a data-driven algorithm lens and study the sample complexity and generalization error of working with k (as opposed to 1) parameter settings. The main difference from our work is that they also aim to learn a selector, which selects one of the k parameters prior to beginning to solve the problem instance. In contrast, in this work we make the selection during the course of the algorithm, and sometimes switch back and forth while honing in on the best predictor.

1.1 Our Results and Contributions

In this paper we study three fundamental problems, min-cost perfect matching, online load balancing, and non-clairvoyant scheduling for total completion time, in this new setting. Each of these has seen significant success in the single-prediction model but is not covered by previous multiple-prediction frameworks. Our results are primarily theoretical; however, we have included a preliminary empirical validation of our algorithm for min-cost perfect matching in the supplementary material.
For each of these we develop algorithms whose performance depends on the error of the best prediction, and we explore the effect of the number of predictions, k. Surprisingly, in the case of matching and scheduling we show that using a limited number of predictions is essentially free, and has no asymptotic impact on the algorithm's performance. For load balancing, on the other hand, we show that the cost of multiple predictions grows logarithmically with k, again implying a tangible benefit of using multiple predictions. We now describe these in more detail.

Min-Cost Perfect Matching. We begin by showcasing our approach with the classical min-cost perfect matching problem in Section 3. This problem was recently studied by [17, 18] to show that it is possible to use learned predictions to improve running times of classical optimization problems. In particular, [18] showed it is possible to speed up the classical Hungarian algorithm by predicting dual values, and moreover that it is possible to efficiently (PAC-)learn the best duals. We show that simple modifications of their ideas lead to similar results for multiple predictions. Interestingly, we show that as long as k ≤ O(√n), the extra "cost" (running time) of using k predictions is negligible compared to the cost of using a single prediction, so we can use up to √n predictions "for free" while still getting running time depending on the best of these predictions. Moreover, since in this setting running time is paramount, we go beyond sample complexity to show that it is also computationally efficient to learn the best k predictions.

Online Load Balancing with Restricted Assignments. We continue in Section 4 with the fundamental load balancing problem. In this problem there are m machines, and n jobs which appear in online fashion. Each job has a size, and a subset of machines that it can be assigned to. The goal is to minimize the maximum machine load (i.e., the makespan). This problem has been studied extensively in the traditional scheduling and online algorithms literature, and recently it has also been the subject of significant study given a single prediction [26–28]. In particular, Lattanzi, Lavastida, Moseley, and Vassilvitskii [26] showed that there exist per-machine "weights" and an allocation function such that the competitive ratio of the algorithm depends logarithmically on the maximum error of the predictions. We show that one can use k predictions and incur an additional O(log k) factor in the competitive ratio, while being competitive with the error of the best prediction. Additionally, we show that learning the best k predicted weights (in a PAC sense) can be done efficiently.

Non-Clairvoyant Scheduling. Finally, in Section 5 we move to the most technically complex part of this paper. We study the problem of scheduling n jobs on a single machine, where all jobs are released at time 0, but where we do not learn the length of a job until it actually completes (the non-clairvoyant model). Our objective is to minimize the sum of completion times. This problem has been studied extensively, both with and without predictions [24, 30, 35, 37]. Most recently, Lindermayr and Megow [30] suggested that we use an ordering as the prediction (as opposed to the more obvious prediction of job sizes), and use the difference between the cost induced by the predicted ordering and the cost induced by the instance-optimal ordering as the notion of "error".
In this case, simply following the predicted ordering yields an algorithm with error equal to the prediction error. We extend this to the multiple prediction setting, which turns out to be surprisingly challenging. The algorithm of [30] is quite simple: follow the ordering given by the prediction (and run a 2-competitive algorithm in parallel to obtain a worst-case backstop). But we obviously cannot do this when we are given multiple orderings! So we must design an algorithm which considers all k predictions to build a schedule that has error comparable to the error of the best one.

Slightly more formally, we prove that we can bound the sum of completion times by (1 + ϵ)OPT plus poly(1/ϵ) times the error of the best prediction, under the mild assumption that no set of at most log log n jobs has a large contribution to OPT. To do this, we first use sampling techniques similar to those of [24] to estimate the size of the approximately ϵn'th smallest job without incurring much cost. We then use even more sampling and partial processing to determine for each prediction whether its ϵn prefix has many jobs that should appear later (a bad sequence) or has very few jobs that should not be in the prefix (a good sequence). If all sequences are bad then every prediction has large error, so we can use a round robin schedule and charge the cost to the prediction error. Otherwise, we choose one of the good orderings and follow it for its ϵn prefix (being careful to handle outliers). We then recurse on the remaining jobs.

1.2 Related Work

As discussed, the most directly related papers are Anand et al. [7] and Balcan, Sandholm, and Vitercik [14]; these give the two approaches (multiple predictions and portfolio-based algorithm selection) that are most similar to our setting. The single prediction version of min-cost bipartite matching was studied in [17, 18], the single prediction version of our load balancing problem was considered by [26–28] (and a different though related load balancing problem was considered by [4]), and the single prediction version of our scheduling problem was considered by [30] with the same prediction that we use (an ordering) and earlier with different predictions by [24, 37, 39]. Online scheduling with estimates of the true processing times was considered in [11, 12].

More generally, there has been an enormous amount of recent progress on algorithms with predictions. This is particularly true for online algorithms, where the basic setup was formalized by [31] in the context of caching. For example, the problems considered include caching [25, 31, 38], secretary problems [9, 20], ski rental [5, 37, 39], and set cover [15]. There has also been recent work on going beyond traditional online algorithms, including work on running times [17, 18], algorithmic game theory [2, 21, 32], and streaming algorithms [1, 19, 23]. The learnability of predictions for online algorithms with predictions was considered by [6]. They give a novel loss function tailored to their specific online algorithm and prediction, and study the sample complexity of learning a mapping from problem features to a prediction. While they are only concerned with the sample complexity of the learning problem, we also consider the computational complexity, giving polynomial time O(1)-approximate algorithms for the learning problems associated with min-cost matching and online load balancing. The above is only a small sample of the work on algorithms with predictions.
We refer the interested reader to a recent survey [33], as well as a recently set up website which maintains a list of papers in the area [29].

2 Learnability

When designing new methods in the algorithms with predictions setting, the predictions under consideration must satisfy two constraints. First, they should be useful to the algorithm, so that using the predictions allows the algorithm to achieve better running time, competitive ratio, or some other performance measure. Second, they must be learnable: it must be feasible to find good predictions given a set of problem instances. To rigorously prove learnability, we follow previous work [13, 18, 34] and focus on proving a bound on the sample complexity of finding the best predictions that generalize. Our main result shows that for a given problem, the pseudo-dimension of finding k predictions is an Õ(k) factor larger than that for finding a single best predictor, where Õ(·) suppresses logarithmic factors. We state the formal theorem below, but defer the proof to the supplementary material.

Theorem 2.1. Let $\mathcal{F}$ be a class of functions $f : X \to \mathbb{R}$ with pseudo-dimension $d$ and let $\mathcal{F}_k := \{F(x) = \min_{\ell \in [k]} f^{\ell}(x) \mid f^1, f^2, \ldots, f^k \in \mathcal{F}\}$. Then the pseudo-dimension of $\mathcal{F}_k$ is at most $\tilde{O}(dk)$.

Note that this directly implies that the sample complexity when looking for k predictions is a factor of k larger than that of a single predictor, by the following well-known theorem.

Theorem 2.2 ([8, 34, 36]). Let $D$ be a distribution over a domain $X$ and $\mathcal{F}$ be a class of functions $f : X \to [0, H]$ with pseudo-dimension $d_{\mathcal{F}}$. Consider $S$ independent samples $x_1, x_2, \ldots, x_S$ from $D$. There is a universal constant $c_0$ such that for any $\epsilon > 0$ and $\delta \in (0, 1)$, if $S \ge c_0 \left(\frac{H}{\epsilon}\right)^2 (d_{\mathcal{F}} + \ln(1/\delta))$, then we have
$$\left| \frac{1}{S} \sum_{s=1}^{S} f(x_s) - \mathbb{E}_{x \sim D}[f(x)] \right| \le \epsilon$$
for all $f \in \mathcal{F}$ with probability at least $1 - \delta$.
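To see the quantitative effect of combining Theorem 2.1 with Theorem 2.2, the following toy Python snippet evaluates the sample bound with the pseudo-dimension inflated from d to roughly dk. The universal constant c0 is not pinned down by the theorem, so the value used below is purely illustrative.

```python
import math

def sample_bound(H, eps, delta, pseudo_dim, c0=1.0):
    """Samples sufficient for uniform eps-accuracy over F (Theorem 2.2).
    c0 is the unspecified universal constant; c0 = 1 is only illustrative."""
    return c0 * (H / eps) ** 2 * (pseudo_dim + math.log(1 / delta))

# Moving from one prediction (pseudo-dimension d) to k predictions
# (pseudo-dimension ~ d*k by Theorem 2.1) scales the bound roughly
# linearly in k, ignoring log factors:
d, k = 10, 5
print(sample_bound(1.0, 0.1, 0.05, d))      # single predictor
print(sample_bound(1.0, 0.1, 0.05, d * k))  # k predictors
```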
3 Minimum Cost Bipartite Matching with Predicted Duals

In this section we study the minimum cost bipartite matching problem with multiple predictions. The case of a single prediction has been considered recently [17, 18], where dual values were used as a prediction and it was shown that the classical Hungarian algorithm could be sped up by using appropriately learned dual values. Our goal in this section is to extend these results to multiple predictions, i.e., multiple duals. In particular, in Section 3.2 we show that we can use k duals and get running time comparable to the time we would have spent if we used the single best of them in the algorithm of [18], with no asymptotic loss if k is at most O(√n). Then in Section 3.3 we show that k predictions can be learned with not too many more samples (or running time) than learning a single prediction.

3.1 Problem Definition and Predicted Dual Variables

In the minimum cost bipartite matching problem we are given a bipartite graph G = (V, E) with n = |V| vertices and m = |E| edges, with edge costs c ∈ Z^E. The objective is to output a perfect matching M ⊆ E which minimizes the cost $c(M) := \sum_{e \in M} c_e$. This problem is exactly captured by the following primal and dual linear programming formulations:
$$\min \sum_{e \in E} c_e x_e \quad \text{s.t.} \quad \sum_{e \in N(i)} x_e = 1 \;\; \forall i \in V, \qquad x_e \ge 0 \;\; \forall e \in E \tag{MWPM-P}$$
$$\max \sum_{i \in V} y_i \quad \text{s.t.} \quad y_i + y_j \le c_e \;\; \forall e = ij \in E \tag{MWPM-D}$$

Dinitz et al. [18] studied initializing the Hungarian algorithm with a prediction ŷ of the optimal dual solution y*. They propose an algorithm which operates in two steps. First, the predicted dual solution ŷ may not be feasible, so they give an O(n + m) time algorithm which recovers feasibility (which we refer to as Make-Feasible). Second, the now-feasible dual solution is used in a primal-dual algorithm such as the Hungarian algorithm (which we refer to as Primal-Dual), and they show that the running time depends on the ℓ1 error in the predicted solution. In addition to this they show that learning a good initial dual solution is computationally efficient with low sample complexity. More formally, they proved the following theorems.

Theorem 3.1 (Dinitz et al. [18]). Let (G, c) be an instance of minimum cost bipartite matching and ŷ be a prediction of an optimal dual solution y*. There exists an algorithm which returns an optimal solution and runs in time $O(m\sqrt{n} \cdot \|y^* - \hat{y}\|_1)$.

Theorem 3.2 (Dinitz et al. [18]). Let D be an unknown distribution over instances (G, c) on n vertices and let y*(G, c) be an optimal dual solution for the given instance. Given S independent samples from D, there is a polynomial time algorithm that outputs a solution ŷ such that
$$\mathbb{E}_{(G,c)\sim D}\left[\|y^*(G,c) - \hat{y}\|_1\right] \le \min_{y}\, \mathbb{E}_{(G,c)\sim D}\left[\|y^*(G,c) - y\|_1\right] + \epsilon$$
with probability $1 - \delta$, where $S = \mathrm{poly}(n, 1/\epsilon, 1/\delta)$.

3.2 Using k Predicted Dual Solutions Efficiently

Given k predicted dual solutions ŷ1, ŷ2, . . . , ŷk, we would like to efficiently determine which solution has the minimum error for the given problem instance. Note that the predicted solutions may still be infeasible and that we do not know the target optimal dual solution y*. We propose the following simple algorithm, which takes as input k predicted solutions and whose running time depends only on the ℓ1 error of the best predicted solution. First, we make each predicted solution feasible, just as before. Next, we select the (now-feasible) dual solution with the highest dual objective value and run the primal-dual algorithm with only that solution. See Algorithm 1 for pseudo-code.

Algorithm 1 Minimum cost matching with k predicted dual solutions
1: procedure k-PredictedPrimal-Dual(G, c, ŷ1, ŷ2, . . . , ŷk)
2:     for ℓ ∈ [k] do
3:         yℓ ← MakeFeasible(G, c, ŷℓ)
4:     end for
5:     ℓ′ ← argmax_{ℓ∈[k]} Σ_{i∈V} y^ℓ_i
6:     M ← Primal-Dual(G, c, y^{ℓ′})
7:     return M
8: end procedure

We have the following result concerning Algorithm 1. To interpret this result, note that the cost of increasing the number of predictions is O(k(n + m)), which will be dominated by the m√n term we pay for running the Hungarian algorithm unless k is extremely large (certainly larger than √n) or there is a prediction with 0 error (which is highly unlikely). Hence we can reap the benefit of a large number of predictions "for free".

Theorem 3.3. Let (G, c) be a minimum cost bipartite matching instance and let ŷ1, ŷ2, . . . , ŷk be predicted dual solutions. Algorithm 1 returns an optimal solution and runs in time $O(k(n+m) + m\sqrt{n} \cdot \min_{\ell \in [k]} \|y^* - \hat{y}^{\ell}\|_1)$.

We defer the proof to the supplementary material: correctness is essentially direct from [18], and the running time requires just a simple modification of the analysis of [18].
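To make the selection step concrete, here is a minimal Python sketch of Algorithm 1. The subroutines `make_feasible` and `primal_dual` stand in for Make-Feasible and Primal-Dual of [18]; they are passed in as parameters since their implementations are not reproduced here.

```python
def k_predicted_primal_dual(G, c, predicted_duals, make_feasible, primal_dual):
    """Sketch of Algorithm 1: warm-start Primal-Dual from the best of k duals.

    `predicted_duals` is a list of k dicts mapping each vertex to its
    predicted dual value; `make_feasible` and `primal_dual` are assumed
    to implement the subroutines of Dinitz et al. [18]."""
    # Repair each prediction so it satisfies y_i + y_j <= c_e on every edge;
    # this costs O(n + m) per prediction, O(k(n + m)) in total.
    feasible = [make_feasible(G, c, y_hat) for y_hat in predicted_duals]

    # Select the feasible dual with the largest objective sum_i y_i; a higher
    # dual objective means a smaller duality gap, which is what drives the
    # running-time bound in Theorem 3.3.
    best = max(feasible, key=lambda y: sum(y.values()))

    # Run the Hungarian-style primal-dual algorithm warm-started from it.
    return primal_dual(G, c, best)
```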
3.3 Learning k Predicted Dual Solutions

Next we extend Theorem 3.2 to the setting where we output k predictions. Let D be a distribution over problem instances (G, c) on n vertices. We show that we can find the best set of k predictions. More formally, we prove the following theorem.

Theorem 3.4. Let D be an unknown distribution over instances (G, c) on n vertices and let y*(G, c) be an optimal dual solution for the given instance. Given S independent samples from D, there is a polynomial time algorithm that outputs k solutions ŷ1, ŷ2, . . . , ŷk such that
$$\mathbb{E}_{(G,c)\sim D}\left[\min_{\ell \in [k]} \|y^*(G,c) - \hat{y}^{\ell}\|_1\right] \le O(1) \cdot \min_{y^1, y^2, \ldots, y^k}\, \mathbb{E}_{(G,c)\sim D}\left[\min_{\ell \in [k]} \|y^*(G,c) - y^{\ell}\|_1\right] + \epsilon$$
with probability $1 - \delta$, where $S = \mathrm{poly}(n, k, 1/\epsilon, 1/\delta)$.

The proof of this theorem can be found in the supplementary material, but it is straightforward. The sample complexity is due to combining Theorem 2.1 with Theorem 3.2 (or more precisely, with the pseudo-dimension bound which implies Theorem 3.2). The O(1)-approximation factor and polynomial running time follow from the observation that the ERM problem in this setting is just an instance of the k-median clustering problem.

4 Online Load Balancing with Predicted Machine Weights

We now apply our framework to online load balancing with restricted assignments. In particular, we consider proportional weights, which have been considered in prior work [26–28]. Informally, we show in Section 4.2 that if β is the cost of the best of the k predictions, then even without knowing a priori which prediction is best, we get cost O(β log k). Then in Section 4.3 we show that it does not take many samples to actually learn the best k predictions.

4.1 Problem Definition and Proportional Weights

In online load balancing with restricted assignments there is a sequence of n jobs which must be assigned to m machines in an online fashion. Upon seeing job j, the online algorithm observes its size $p_j > 0$ and a neighborhood N(j) ⊆ [m] of feasible machines. The algorithm must then choose some feasible machine i ∈ N(j) to irrevocably assign the job to, before seeing any more jobs in the sequence. We also consider fractional assignments, i.e., vectors belonging to the set $X = \{x \in \mathbb{R}^{m \times n}_{+} \mid \forall j \in [n], \sum_i x_{ij} = 1, \text{ and } x_{ij} = 0 \iff i \notin N(j)\}$.

Prior work studied the application of proportional weights [3, 26–28]. Intuitively, a prediction in this setting is a weighting of machines, which then implies an online assignment, which is shown to be near-optimal. Slightly more formally, suppose that we are given weights $w_i$ for each machine i. Then each job j is fractionally assigned to machine i in the fractional amount $\frac{w_i}{\sum_{i' \in N(j)} w_{i'}}$. Notice that given weights, this also gives an online assignment. It is known that for any instance there exist weights such that the fractional solution has a near-optimal makespan, even though there are only m "degrees of freedom" in the weights, compared to mn in an assignment. That is, for all machines i, $\sum_{j \in [n]} p_j \cdot \frac{w_i}{\sum_{i' \in N(j)} w_{i'}}$ is at most a (1 + ϵ) factor larger than the optimal makespan, for any constant ϵ > 0 [3, 26]. Let w* be a set of near-optimal weights for a given instance. Lattanzi et al. [26] showed the following theorem:

Theorem 4.1. Given predicted weights ŵ, there is an online fractional algorithm which has makespan $O(\log(\eta(\hat{w}, w^*))) \cdot \mathrm{OPT}$, where $\eta(\hat{w}, w^*) := \max_{i \in [m]} \max\left(\frac{\hat{w}_i}{w^*_i}, \frac{w^*_i}{\hat{w}_i}\right)$ is the error in the prediction.

Moreover, this fractional assignment can be converted online to an integral assignment while losing only an O(log log m) factor in the makespan [26, 28]. Thus, we focus on constructing fractional assignments that are competitive with the best prediction in hindsight.
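To illustrate the proportional-weights rule from Section 4.1, the following sketch fractionally assigns an online stream of jobs and reports the fractional makespan. It is a toy illustration with our own variable names, not code from the paper.

```python
def proportional_assign(weights, jobs):
    """Fractionally assign each arriving job across its feasible machines
    in proportion to the machine weights w_i, and return the fractional
    makespan. `jobs` is a list of (size, feasible machine set) pairs."""
    m = len(weights)
    load = [0.0] * m
    for p_j, neighborhood in jobs:
        total = sum(weights[i] for i in neighborhood)
        for i in neighborhood:
            # x_{ij} = w_i / sum of weights over N(j)
            load[i] += p_j * weights[i] / total
    return max(load)

# Example: two machines, the second twice as heavy; two unit-size jobs,
# the second restricted to machine 1.
print(proportional_assign([1.0, 2.0], [(1.0, {0, 1}), (1.0, {1})]))
```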
4.2 Combining Fractional Solutions Online

Given k different predicted weight vectors ŵ1, ŵ2, . . . , ŵk, we want to give an algorithm which is competitive against the minimum error among the predicted weights, i.e., we want the competitiveness to depend upon $\eta_{\min} := \min_{\ell \in [k]} \eta(\hat{w}^{\ell}, w^*)$. The challenge is that we do not know up front which ℓ ∈ [k] will yield the smallest error, but instead learn this in hindsight.

For each ℓ ∈ [k], let $x^{\ell}$ be the resulting fractional assignment from applying the fractional online algorithm due to [26] with weights $\hat{w}^{\ell}$. This fractional assignment is revealed one job at a time. We give an algorithm which is O(log k)-competitive against any collection of k fractional assignments which are revealed online. Moreover, our result applies to the unrelated machines setting, in which each job has a collection of machine-dependent sizes $\{p_{ij}\}_{i \in [m]}$. The algorithm is based on the doubling trick and is similar to results in [10] which apply to metrical task systems. Let $\beta := \min_{\ell \in [k]} \max_i \sum_j p_{ij} x^{\ell}_{ij}$ be the best fractional makespan in hindsight. As in previous work, our algorithm is assumed to know β, an assumption that can be removed [26]. At a high level, our algorithm maintains a set A ⊆ [k] of solutions which are good with respect to the current value of β, averaging among these. See Algorithm 2 for a detailed description. We have the following theorem.

Theorem 4.2. Let x1, x2, . . . , xk be fractional assignments which are revealed online. If Algorithm 2 is run with $\beta := \min_{\ell \in [k]} \max_i \sum_j p_{ij} x^{\ell}_{ij}$, then it yields a solution of cost O(log k) · β and never reaches the fail state (line 7 in Algorithm 2).

Let $\beta_{\ell} = \max_i \sum_j p_{ij} x^{\ell}_{ij}$ and OPT be the optimal makespan. Theorem 4.1 shows that $\beta_{\ell} \le O(\log \eta_{\ell}) \cdot \mathrm{OPT}$. The following corollary is then immediate:

Corollary 4.3. Let w1, w2, . . . , wk be the predicted weights with errors η1, η2, . . . , ηk. Then Algorithm 2 returns a fractional assignment with makespan at most $\mathrm{OPT} \cdot O(\log k) \cdot \min_{\ell \in [k]} \log(\eta_{\ell})$.

Algorithm 2 Algorithm for combining fractional solutions online for load balancing.
1: procedure Combine-LoadBalancing(β)
2:     A ← [k]  ▷ Initially all solutions are good
3:     for each job j do
4:         Receive the assignments x1, x2, . . . , xk
5:         A(j, β) ← {ℓ ∈ A | ∀i ∈ [m], x^ℓ_ij > 0 ⟹ p_ij x^ℓ_ij ≤ β}
6:         if A = ∅ or A(j, β) = ∅ then
7:             return "Fail"
8:         end if
9:         ∀i ∈ [m], x_ij ← (1/|A(j, β)|) Σ_{ℓ∈A(j,β)} x^ℓ_ij
10:        B ← {ℓ ∈ A | max_{i∈[m]} Σ_{j′≤j} p_{ij′} x^ℓ_{ij′} > β}  ▷ Bad solutions w.r.t. β
11:        A ← A \ B
12:    end for
13: end procedure

We defer the proof of Theorem 4.2 to the supplementary material.
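For concreteness, here is a minimal Python sketch of Algorithm 2 for a fixed guess β: it keeps the set A of surviving predictions and averages the per-job assignments of the predictions in A(j, β). The input format and names are our own illustrative choices, and in practice β would be found by the standard doubling trick.

```python
def combine_load_balancing(stream, beta, m, k):
    """Sketch of Algorithm 2. `stream` yields, for each arriving job, a pair
    (p, xs): p[i] is the job's size on machine i (unrelated machines), and
    xs[l] is prediction l's fractional assignment of the job (length-m list)."""
    A = set(range(k))                      # predictions still considered good
    loads = [[0.0] * m for _ in range(k)]  # running load of each prediction
    combined = []                          # our combined fractional assignment
    for p, xs in stream:
        # A(j, beta): surviving predictions whose support for this job fits beta.
        Ajb = [l for l in A
               if all(p[i] * xs[l][i] <= beta
                      for i in range(m) if xs[l][i] > 0)]
        if not A or not Ajb:
            raise RuntimeError("Fail: beta was guessed too small")
        # Average the qualifying predictions' assignments for this job.
        x_j = [sum(xs[l][i] for l in Ajb) / len(Ajb) for i in range(m)]
        combined.append(x_j)
        # Update per-prediction loads; drop predictions exceeding beta.
        for l in list(A):
            for i in range(m):
                loads[l][i] += p[i] * xs[l][i]
            if max(loads[l]) > beta:
                A.discard(l)
    return combined
```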
4.3 Learning k Predicted Weight Vectors

We now turn to the question of how to learn k different predicted weight vectors ŵ1, ŵ2, . . . , ŵk. Recall that there is an unknown distribution D over sets of n jobs from which we receive independent samples J1, J2, . . . , JS. Our goal is to show that we can efficiently learn (in terms of sample complexity) k predicted weight vectors that minimize the expected minimum error. Let w*(J) be the correct weight vector for instance J and let $\eta(w, w') = \max_{i \in [m]} \max\left(\frac{w_i}{w'_i}, \frac{w'_i}{w_i}\right)$ be the error between a pair of weight vectors. We have the following result.

Theorem 4.4. Let D be an unknown distribution over restricted assignment instances on n jobs and let w*(J) be a set of good weights for instance J. Given S independent samples from D, there is a polynomial time algorithm that outputs k weight vectors ŵ1, ŵ2, . . . , ŵk such that
$$\mathbb{E}_{J \sim D}\left[\min_{\ell \in [k]} \log(\eta(\hat{w}^{\ell}, w^*(J)))\right] \le O(1) \cdot \min_{w^1, w^2, \ldots, w^k}\, \mathbb{E}_{J \sim D}\left[\min_{\ell \in [k]} \log(\eta(w^{\ell}, w^*(J)))\right] + \epsilon$$
with probability $1 - \delta$, where $S = \mathrm{poly}(m, k, 1/\epsilon, 1/\delta)$.

The proof of Theorem 4.4 is deferred to the supplementary material, but we note that to get a polynomial time algorithm we carry out an interesting reduction to k-median clustering. Namely, we show that the function $d(w, w') := \log(\eta(w, w'))$ satisfies the triangle inequality and thus forms a metric space.

5 Scheduling with Predicted Permutations

In this problem there are n jobs, indexed by 1, 2, . . . , n, to be scheduled on a single machine. We assume that they are all available at time 0. Job j has size $p_j$ and needs to be processed for $p_j$ time units to complete. If all job sizes are known a priori, Shortest Job First (or equivalently Shortest Remaining Time First), which processes jobs in non-decreasing order of their size, is known to be optimal for minimizing total completion time. We assume that the true value of $p_j$ is unknown and is revealed only when the job completes, i.e., the non-clairvoyant setting. In the non-clairvoyant setting, it is known that Round-Robin (which processes all alive jobs equally) is 2-competitive and that this is the best competitive ratio one can hope for [35].

We study this basic scheduling problem assuming certain predictions are available for use. Following the recent work by Lindermayr and Megow [30], we assume that we are given k orderings/sequences as predictions, $\{\sigma_{\ell}\}_{\ell \in [k]}$. Each $\sigma_{\ell}$ is a permutation of J := [n]. Intuitively, it suggests an ordering in which jobs should be processed. This prediction is inspired by the aforementioned Shortest Job First (SJF), as an optimal schedule can be described as an ordering of jobs, specifically increasing order of job sizes. For each $\sigma_{\ell}$, its error is measured as $\eta(J, \sigma_{\ell}) := \mathrm{COST}(J, \sigma_{\ell}) - \mathrm{OPT}(J)$, where $\mathrm{COST}(J, \sigma_{\ell})$ denotes the objective of the schedule where jobs are processed in the order of $\sigma_{\ell}$ and $\mathrm{OPT}(J)$ denotes the optimal objective value. We may drop J from the notation when it is clear from context. As observed in [30], the error can be expressed as
$$\eta(J, \sigma_{\ell}) = \sum_{i < j \in J} I^{\ell}_{i,j} \cdot |p_i - p_j|,$$
where $I^{\ell}_{i,j}$ is an indicator variable for an 'inversion' that has value 1 if and only if $\sigma_{\ell}$ predicts the pairwise ordering of i and j incorrectly. That is, if $p_i < p_j$, then the optimal schedule would process i before j; here $I^{\ell}_{i,j} = 1$ iff $i \succ_{\sigma_{\ell}} j$. As discussed in [30], this error measure satisfies two desired properties, monotonicity and Lipschitzness, which were formalized in [24].
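Because the error decomposes over pairwise inversions, $\eta(J, \sigma)$ is straightforward to evaluate when the true sizes are known. Below is a small O(n²) Python sketch (names are ours) for sanity-checking the definition.

```python
def prediction_error(p, sigma):
    """eta(J, sigma): sum of |p_i - p_j| over inverted pairs, i.e. pairs
    that sigma orders against the sorted-by-size (SJF) order."""
    pos = {job: t for t, job in enumerate(sigma)}  # job -> position in sigma
    n = len(p)
    err = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            shorter, longer = (i, j) if p[i] < p[j] else (j, i)
            if pos[shorter] > pos[longer]:  # sigma runs the longer job first
                err += abs(p[i] - p[j])     # equal sizes contribute 0
    return err

# Example: sizes [1, 3, 2]; the prediction (0, 2, 1) is exactly SJF here,
# so its error is 0.
print(prediction_error([1.0, 3.0, 2.0], [0, 2, 1]))  # 0.0
```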
Our main result is the following.

Theorem 5.1. Consider a constant ϵ > 0. Suppose that for any S ⊆ J with $|S| = \Theta(\frac{1}{\epsilon^4}(\log\log n + \log k + \log(1/\epsilon)))$, we have $\mathrm{OPT}(S) \le c\epsilon \cdot \mathrm{OPT}(J)$ for some small absolute constant c. Then there exists a randomized algorithm that yields a schedule whose expected total completion time is at most $(1 + \epsilon)\mathrm{OPT} + (1 + \epsilon)\frac{1}{\epsilon^5}\, \eta(J, \sigma_{\ell})$ for all ℓ ∈ [k].

As a corollary, by running our algorithm with 1 − ϵ processing speed and simultaneously running Round-Robin with the remaining ϵ of the speed, the cost increases by a factor of at most 1/(1 − ϵ) while the resulting hybrid algorithm is 2/ϵ-competitive. (This hybrid algorithm is essentially the preferential time sharing of [24, 30, 37]. Formally, we run our algorithm ignoring RR's processing and also run RR ignoring our algorithm; this can be done by a simple simulation. Thus, we construct two schedules concurrently and each job completes at the time when it does in either schedule. This type of algorithm was first used in [37].)

5.1 Algorithm

To make our presentation more transparent we first round job sizes. Formally, we choose ρ uniformly at random from [0, 1). Then, we round up each job j's size to the closest number of the form $(1 + \epsilon)^{\rho + t}$ for some integer t. Then, we scale down all job sizes by a $(1 + \epsilon)^{\rho}$ factor. We present our algorithm and analysis assuming that every job has a size equal to a power of (1 + ϵ). In the supplementary material we show how to remove this assumption without increasing our algorithm's objective by more than a 1 + ϵ factor in expectation.

We first present the following algorithm, which achieves Theorem 5.1 with $|S| = \Theta(\frac{1}{\epsilon^4}(\log n + \log k))$. The improved bound claimed in the theorem needs minor tweaks of the algorithm and analysis, which are deferred to the supplementary material. Our algorithm runs in rounds. Let $J_r$ be the jobs that complete in round r ≥ 1. For any subset S of rounds, $J_S := \cup_{r \in S} J_r$; for example, $J_{\le r} := J_1 \cup \cdots \cup J_r$. Let $n_r := |J_{\ge r}| = n - |J_{< r}|$ denote the number of alive jobs at the beginning of round r.

Fix the beginning of round r. The algorithm processes jobs in the following way for this round. If $n_r \le \frac{1}{\epsilon^4}(\log n + \log k)$, we run Round-Robin to complete all the remaining jobs, $J_{\ge r}$. This is the last round and it is denoted as round L + 1. Otherwise, we do the following Steps 1–4:

Step 1. Estimating the ϵ-percentile. Roughly speaking, the goal is to estimate the ϵ-percentile of job sizes among the remaining jobs. For a job $j \in J_{\ge r}$, define its rank among $J_{\ge r}$ as the number of jobs no smaller than j in $J_{\ge r}$, breaking ties in an arbitrary yet fixed way. Ideally, we would like to estimate the size of the job of rank $\epsilon n_r$, but we do so only approximately. The algorithm will find $\tilde{q}_r$, the size of a job whose rank lies in $[\epsilon(1 - \epsilon)n_r, \epsilon(1 + \epsilon)n_r]$. To handle the case that there are many jobs of the same size $\tilde{q}_r$, we estimate $y_r$, the number of jobs no bigger than $\tilde{q}_r$; let $\tilde{y}_r$ denote our estimate of $y_r$. We show how to perform these estimations without spending much time, by sampling some jobs and partially processing them in Round-Robin manner (the proof of the following lemma can be found in the supplementary material).

Lemma 5.2. W.h.p. the algorithm can construct estimates $\tilde{q}_r$ and $\tilde{y}_r$ in time at most $O(\tilde{q}_r \frac{1}{\epsilon^2} \log n)$ such that there is a job of size $\tilde{q}_r$ whose rank lies in $[\epsilon(1 - \epsilon)n_r, \epsilon(1 + \epsilon)n_r]$ and $|\tilde{y}_r - y_r| \le \epsilon^2 n_r$.

Step 2. Determining Good and Bad Sequences. Let $\sigma^r_{\ell}$ denote $\sigma_{\ell}$ with all jobs completed in the previous rounds removed and with the relative ordering of the remaining jobs fixed. Let $\sigma^r_{\ell,\epsilon}$ denote the first $\tilde{y}_r$ jobs in the ordering. We say a job j is big if $p_j > \tilde{q}_r$; middle if $p_j = \tilde{q}_r$; and small otherwise. Using sampling and partial processing we approximately distinguish good and bad sequences. Informally, $\sigma^r_{\ell}$ is good if $\sigma^r_{\ell,\epsilon}$ has few big jobs and bad if it has many big jobs. The proof of the following lemma can be found in the supplementary material.

Lemma 5.3. For all ℓ ∈ [k], we can label sequence $\sigma^r_{\ell}$ either good or bad in time at most $O(\tilde{q}_r \frac{1}{\epsilon^2}(\log n + \log k))$ such that the following holds with high probability: if the sequence is good, $\sigma^r_{\ell,\epsilon}$ has at most $3\epsilon^2 n_r$ big jobs; otherwise $\sigma^r_{\ell,\epsilon}$ has at least $\epsilon^2 n_r$ big jobs.

Step 3. Job Processing. If all sequences are bad, then we process all jobs, each up to $\tilde{q}_r$ units, in an arbitrary order. Otherwise, we process the first $\tilde{y}_r$ jobs in an arbitrary good sequence, in an arbitrary order, each up to $\tilde{q}_r$ units.

Step 4. Updating Sequences. The jobs completed in this round drop from the sequences, but the remaining jobs' relative ordering remains fixed in each (sub-)sequence. For simplicity, we assume that partially processed jobs were never processed; this is without loss of generality, as the assumption only increases our schedule's objective.
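Putting Steps 1–4 together, a single round of the scheduler can be sketched at a high level as follows. The callbacks `estimate_percentile`, `label_sequences`, and `process` stand in for the sampling subroutines of Lemmas 5.2 and 5.3 and for the actual processing; everything here is an illustrative simplification rather than the full algorithm.

```python
def run_round(alive, sequences, eps,
              estimate_percentile, label_sequences, process):
    """One round of the multi-prediction scheduler (high-level sketch).

    `alive`: unfinished jobs; `sequences`: the k predictions restricted to
    alive jobs; the three callbacks are assumptions of this sketch."""
    # Step 1: estimate the eps-percentile size q and the count y of jobs
    # no bigger than q (Lemma 5.2), via sampling and partial Round-Robin.
    q, y = estimate_percentile(alive, eps)
    # Step 2: label each prediction good/bad according to how many 'big'
    # jobs (size > q) its y-prefix contains (Lemma 5.3).
    labels = label_sequences(sequences, q, eps)
    good = [s for s, lab in zip(sequences, labels) if lab == "good"]
    if not good:
        # Step 3a: all prefixes are error-heavy; process every alive job up
        # to q units, charging the cost to any prediction's error.
        process(list(alive), cap=q)
    else:
        # Step 3b: follow an arbitrary good sequence for its prefix.
        process(good[0][:y], cap=q)
    # Step 4 happens upstream: completed jobs drop out of all sequences,
    # with the survivors' relative order preserved.
```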
5.2 Analysis of the Algorithm's Performance

We defer the analysis of the above algorithm (the proof of Theorem 5.1) to the supplementary material, as it is quite technical and complex. At a very high level, though, we use the fact that the error in each prediction can be decomposed into pairwise inversions, and moreover that we can partition the inversions into the rounds of the algorithm in which they appear. Then we look at each round and split into two cases. First, if all sequences are bad, then every prediction has large error, so we can simply use Round-Robin (which is 2-competitive against OPT) and the cost can be charged to the error of any prediction. Second, if there is a good sequence, then in any good sequence the number of big jobs is small (so we do not spend much time processing them), and we therefore complete almost all of the non-big jobs. Here, we crucially use the fact that we can process the first ϵ fraction of jobs in a sequence in an arbitrary order while remaining competitive against the sequence. Finally, we show that all of the additional assumptions and costs (e.g., rounding processing times and the cost due to sampling) change our performance by only a 1 + ϵ factor. Getting all of these details right requires much care.

5.3 Learning k Predicted Permutations

Now we show that learning the best k permutations has polynomial sample complexity.

Theorem 5.4. Let D be an unknown distribution of instances on n jobs. Given S independent samples from D, there is an algorithm that outputs k permutations σ̂1, σ̂2, . . . , σ̂k such that
$$\mathbb{E}_{J \sim D}\left[\min_{\ell \in [k]} \eta(J, \hat{\sigma}^{\ell})\right] \le \min_{\sigma^1, \sigma^2, \ldots, \sigma^k}\, \mathbb{E}_{J \sim D}\left[\min_{\ell \in [k]} \eta(J, \sigma^{\ell})\right] + \epsilon$$
with probability $1 - \delta$, where $S = \mathrm{poly}(n, k, 1/\epsilon, 1/\delta)$.

Proof. The algorithm is basic ERM, and the polynomial sample complexity follows from Theorem 2.1 and Theorem 20 in Lindermayr and Megow [30].

6 Conclusion

Despite the explosive recent work in algorithms with predictions, almost all of this work has assumed only a single prediction. In this paper we study algorithms with multiple machine-learned predictions, rather than just one. We study three different problems that have been well-studied in the single prediction setting but not with multiple predictions: faster algorithms for min-cost bipartite matching using learned duals, online load balancing with learned machine weights, and non-clairvoyant scheduling with order predictions. For all of the problems we design algorithms that can utilize multiple predictions, and show sample complexity bounds for learning the best set of k predictions. Demonstrating the effectiveness of our algorithms (and the broader use of multiple predictions) empirically is an interesting direction for further work.

Surprisingly, we have shown that in some cases, using multiple predictions is essentially "free." For instance, in the case of min-cost perfect matching, examining k = O(√n) predictions takes the same amount of time as one round of the Hungarian algorithm, but the number of rounds is determined by the quality of the best prediction. In contrast, for load balancing, using k predictions always incurs an O(log k) cost, so using a constant number of predictions may be best. More generally, studying this trade-off between the cost and the benefit of multiple predictions for other problems remains an interesting and challenging open problem.

Acknowledgments and Disclosure of Funding

Michael Dinitz was supported in part by NSF grant CCF-1909111. Sungjin Im was supported in part by NSF grants CCF-1617653, CCF-1844939 and CCF-2121745. Thomas Lavastida and Benjamin Moseley were supported in part by NSF grants CCF-1824303, CCF-1845146, CCF-2121744 and CMMI-1938909.
Benjamin Moseley was additionally supported in part by a Google Research Award, an Infor Research Award, and a Carnegie Bosch Junior Faculty Chair.
1. What are the main contributions and strengths of the paper regarding the use of multiple machine-learned predictions for algorithm design?
2. What are the weaknesses and limitations of the paper compared to prior works like Anand et al and Lattanzi et al?
3. How does the reviewer assess the technical level of the results in the paper, particularly for the matching and scheduling problems?
4. Are there any minor errors or missing references in the paper that the reviewer noticed?
5. What are some potential future directions for research on multiple predictions in learning-based algorithms, such as exploring offline graph algorithms or addressing rounding fractional solutions?
Summary Of The Paper

The paper considers using multiple machine-learned predictions to improve classic algorithm design. In particular, the work is focused on three problems: min-cost bipartite matching, online makespan minimization, and non-clairvoyant scheduling. For the first problem, the goal is to improve run-time, whereas for the last two the goal is to improve the competitive ratio. In each setting, the paper gives an efficient algorithm that makes use of multiple predictions. When (at least one of) the predictions are good, the resulting learning-augmented algorithm can achieve better performance than its classic, worst-case counterparts. Moreover, the paper works out the learnability conditions, showing that good predictions can be efficiently PAC-learned from data.

Strengths And Weaknesses

Strengths: The paper is generally well-written, though with a few minor typos that may be confusing at times; I have listed them in a later section. The results are novel in light of the recent literature on learning-based algorithms. In particular, this can be seen as a good follow-up on the question of algorithms with multiple predictions, recently initiated by Anand, Ge, Kumar, and Panigrahi ["Online algorithms with multiple predictions" (ICML 22)]. As the author(s) point out, Anand et al. focus only on a set of online covering problems. The results of this paper do not lie within this framework. I believe this paper is a good addition to this line of work. The theoretic claims are correct.

Weaknesses: In my view, some of the results in this work are a bit weak on a technical level. In particular, Section 3 on matching extends Dinitz et al. in a fairly trivial fashion. The main algorithm (Algorithm 1) essentially says: take the largest dual prediction and run the Dinitz et al. single-predictor algorithm. This should be thought of as a simple observation. On the other hand, the algorithmic result of Section 4 on scheduling relies on several key insights of Lattanzi et al. ("Online Scheduling via Learned Weights", SODA 20). The learnability generally follows from a pseudo-dimension + PAC learning argument, again from Dinitz et al. Compared with Anand et al., this paper does not provide a general framework. Rather, it addresses three separate problems of different nature. That is, matching is offline and the goal is to improve run-time, but the scheduling problems are online and the goal is to deal with the uncertainty. Compared with Dinitz et al., the paper does not provide experiments. Compared with Lattanzi et al., the paper does not address rounding fractional solutions.

Minor errors and missing references:
- Line 11: "which prediction is [the] best"
- Line 104: another paper that studies non-clairvoyant scheduling in the single-predictor setting is "Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms" by Wei & Zhang (NeurIPS 20).
- Line 249 reads a bit awkwardly, as it is not making a definition. Maybe remove the "we say that".
- LHS of Line 10 of Algorithm 2 should be S(j, β)? I suggest the author(s) avoid overloading the notation A here.
- LHS of Line 10 of Algorithm 2 should be B instead of A.
- Line 314: Optimal schedule, and Line 318: optimal objective value.
- Line 423 of the supplement: I suggest the author(s) give a standard reference for the constant-approximation of k-median; e.g., "A Constant-Factor Approximation Algorithm for the k-Median Problem" (https://www.sciencedirect.com/science/article/pii/S0022000002918829).

Questions

For matching, I am wondering if experimentally one can find that multiple predictions help (for real-world or synthetic data). Let's say k = 5 or 10. In principle it may not; that is, the best single prediction (in expectation over the distribution) is generally very good for every single instance in the distribution. My intuition is that if the distribution is somewhat "concentrated" such that the instances generally look alike, then multiple predictions may not help. On the other hand, if the distribution has a "multiple clusters" structure, then having multiple predictions can provide much better coverage and therefore improve performance a lot. Is it possible to formalize this intuition and give some theory, for matching or graph problems in general?

The recent work by Chen, Silwal, Vakilian, and Zhang [Faster fundamental graph algorithms via learned predictions, ICML 22] shows that using a (single) prediction can help to improve the run-time of graph algorithms, beyond the matching problem of Dinitz et al. I think multiple predictions can be interesting more broadly in speeding up offline graph algorithms. This is another future direction the author(s) can explore.

For online makespan minimization, can the fractional solution be rounded in an online fashion? A majority of the paper by Lattanzi et al. is spent on rounding (their Sections 4 and 5). I think their techniques can be applied here.

Limitations

As mentioned, the paper does not give a general framework for addressing multiple predictions in learning-based algorithms. In fact, reading its title, I expected it would. Some experiments on matching could be very interesting.
NIPS
Title Algorithms with Prediction Portfolios Abstract The research area of algorithms with predictions has seen recent success showing how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick the best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm and the question is how to best utilize them. Ideally, we would like the algorithm’s performance to depend on the quality of the best predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is the best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance. 1 Introduction An exciting recent line of research attempts to go beyond traditional worst-case analysis of algorithms by equipping algorithms with machine-learned predictions. The hope is that these predictions allow the algorithm to circumvent worst case lower bounds when the predictions are good, and approximately match them otherwise. The precise definitions and guarantees vary with different settings, but there have been significant successes in applying this framework for many different algorithmic problems, ranging from general online problems to classical graph algorithms (see Section 1.2 for a more detailed discussion of related work, and [33] for a survey). In all of these settings it turns out to be possible to define a “prediction” where the “quality” of the algorithm (competitive ratio, running time, etc.) depends the “error” of the prediction. Moreover, in at least some of these settings, it has been further shown that this prediction is actually learnable with a small number of samples, usually via standard ERM methods [18]. Previous work has shown the power of accurate predictions, and there are numerous examples showing improved performance in both theory and practice. However, developing accurate predictors remains an art, and a single predictor may not capture all of the subtleties of the instance space. Recently, researchers have turned to working with portfolios of predictors: instead of training a single model, train multiple models, with the hope that one of them will give good guarantees. ∗Work was done while the author was at Carnegie Mellon University. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). It is easy to see why the best predictor in a portfolio may be significantly better than a one-size fits all predictor. First, many of the modern machine learning methods come with a slew of hyperparameters that require tuning. Learning rate, mini-batch size, optimizer choice, all of these have significant impact on the quality of the final solution. Instead of commiting to a single setting, one can instead try to cover the parameter space, with the hope that some of the predictors will generalize better than others. 
Second, problem instances themselves may come from complex distributions, consisting of many latent groups or clusters. A single predictor is forced to perform well on average, whereas multiple predictors can be made to “specialize” to each cluster. In order to take advantage of the increased accuracy provided by the portfolio approach, we must adapt algorithms with predictions to take advantage of multiple predictions. To capture the gains in performance, the algorithm must perform as if equipped with the best predictor, auto-tuning to use the best one available in the portfolio. However, it is easy to see that there should be a cost as the size of the portfolio grows. In the extreme, one can add every possible prediction to the portfolio, providing no additional information, yet now requiring high performance from the algorithm. Therefore, we must aim to minimize the dependence on the number of predictions in the portfolio. We remark that the high level set up may be reminiscent of expert- or bandit-learning literature. However, there is a critical distinction. In expert and bandit learning, we are given a sequence of problem instances, and the goal is to compete (minimize regret) with respect to the best prediction averaged over the whole sequence. On the other hand, in our setup, we aim to compete with the best predictor on a per-instance basis. Previous work on multiple predictions. Bhaskara et al. studied an online linear optimization problem where the learner seeks to minimize the regret, provided access to multiple hints [16]. Inspired by the work, Anand et al. recently studied algorithms with multiple learned predictions in [7], proving strong bounds for important online covering problems including online set cover, weighted caching, and online facility location. It was a significant extension of the work [22] which studied the rent-or-buy problem with access to two predictions. However, their techniques and results are limited to online covering problems. Moreover, they do not discuss the learning aspects at all: they simply assume that they are given k predictions, and their goal is to have competitive ratios that are based on the minimum error of any of the k predictions. (They actually compete against a stronger dynamic benchmark, but for our purposes this distinction is not important.) On the other hand Balcan et al. [14] look at this problem through a data driven algorithm lens and study the sample complexity and generalization error of working with k (as opposed to 1) parameter settings. The main difference from our work is that they also aim learn a selector, which selects one of the k parameters prior to beginning to solve the problem instance. In contrast, in this work we make the selection during the course of the algorithm, and sometimes switch back and forth while honing in on the best predictor. 1.1 Our Results and Contributions In this paper we study three fundamental problems, min-cost perfect matching, online load balancing, and non-clairvoyant scheduling for total completion time, in this new setting. Each of these has seen significant success in the single-prediction model but is not covered by previous multiple-prediction frameworks. Our results are primarily theoretical, however we have included a preliminary empirical validation of our algorithm for min-cost perfect matching in the supplementary material. 
For each of these we develop algorithms whose performance depends on the error of the best prediction, and explore the effect of the number of predictions, k. Surprisingly, in the case of matching and scheduling we show that using a limited number of predictions is essentially free, and has no asymptotic impact on the algorithm’s performance. For load balancing, on the other hand, we show that the cost of multiple predictions grows logarithmically with k, again implying a tangible benefit of using multiple predictions. We now describe these in more detail. Min-Cost Perfect Matching. We begin by showcasing our approach with the classical min-cost perfect matching problem in Section 3. This problem was recently studied by [17, 18] to show that it is possible to use learned predictions to improve running times of classical optimization problems. In particular, [18] showed it is possible to speed up the classical Hungarian algorithm by predicting dual values, and moreover that it is possible to efficiently (PAC-)learn the best duals. We show that simple modifications of their ideas lead to similar results for multiple predictions. Interestingly, we show that as long as k ≤ O( √ n), the extra “cost” (running time) of using k predictions is negligible compared to the cost of using a single prediction, so we can use up to √ n predictions “for free” while still getting running time depending on the best of these predictions. Moreover, since in this setting running time is paramount, we go beyond sample complexity to show that it is also computationally efficient to learn the best k predictions. Online Load Balancing with Restricted Assignments. We continue in Section 4 with the fundamental load balancing problem. In this problem there are m machines, and n jobs which appear in online fashion. Each job has a size, and a subset of machines that it can be assigned to. The goal is to minimize the maximum machine load (i.e., the makespan). This problem has been studied extensively in the traditional scheduling and online algorithms literature, and recently it has also been the subject of significant study given a single prediction [26–28]. In particular, Lattanzi, Lavastida, Moseley, and Vassilvitskii [26] showed that there exist per machine “weights” and an allocation function so that the competitive ratio of the algorithm depends logarithmically on the maximum error of the predictions. We show that one can use k predictions and incur an additional O(log k) factor in the competitive ratio, while being competitive with the error of the best prediction. Additionally, we show that learning the best k predicted weights (in a PAC sense) can be done efficiently. Non Clairvoyant Scheduling Finally, in Section 5 we move to the most technically complex part of this paper. We study the problem of scheduling n jobs on a single machine, where all jobs are released at time 0, but where we do not learn the length of a job until it actually completes (the non-clairvoyant model). Our objective is to minimize the sum of completion times. This problem has been studied extensively, both with and without predictions [24, 30, 35, 37]. Most recently, Lindermayr and Megow [30] suggested that we use an ordering as the prediction (as opposed to the more obvious prediction of job sizes), and use the difference between the cost induced by the predicted ordering and the cost induced by the instance-optimal ordering as the notion of “error”. 
In this case, simply following the predicted ordering yields an algorithm with error equal to the prediction error. We extend this to the multiple prediction setting, which turns out to be surprisingly challenging. The algorithm of [30] is quite simple: follow the ordering given by the prediction (and run a 2-competitive algorithm in parallel to obtain a worst-case backstop). But we obviously cannot do this when we are given multiple orderings! So we must design an algorithm which considers all k predictions to build a schedule that has error comparable to the error of the best one. Slightly more formally, we prove that we can bound the sum of completion times by (1 + ϵ)OPT plus poly(1/ϵ) times the error of the best prediction, under the mild assumption that no set of at most log log n jobs has a large contribution to OPT. To do this, we first use sampling techniques similar to those of [24] to estimate the size of the approximately ϵn’th smallest job without incurring much cost. We then use even more sampling and partial processing to determine for each prediction whether its ϵn prefix has many jobs that should appear later (a bad sequence) or has very few jobs that should not be in the prefix (a good sequence). If all sequences are bad then every prediction has large error, so we can use a round robin schedule and charge the cost to the prediction error. Otherwise, we choose one of the good orderings and follow it for its ϵn prefix (being careful to handle outliers). We then recurse on the remaining jobs. 1.2 Related Work As discussed, the most directly related papers are Anand et al. [7] and Balcan, Sandholm, and Vitercik [14]; these give the two approaches (multiple predictions and portfolio-based algorithm selection) that are most similar to our setting. The single prediction version of min-cost bipartite matching was studied in [17, 18], the single prediction version of our load balancing problem was considered by [26–28] (and a different though related load balancing problem was considered by [4]), and the single prediction version of our scheduling problem was considered by [30] with the same prediction that we use (an ordering) and earlier with different predictions by [24, 37, 39]. Online scheduling with estimates of the true processing times was considered in [11, 12]. More generally, there has been an enormous amount of recent progress on algorithms with predictions. This is particularly true for online algorithms, where the basic setup was formalized by [31] in the context of caching. For example, the problems considered include caching [25, 31, 38], secretary problems [9, 20], ski rental [5, 37, 39], and set cover [15]. There has also been recent work on going beyond traditional online algorithms, including work on running times [17, 18], algorithmic game theory [2, 21, 32], and streaming algorithms [1, 19, 23]. The learnability of predictions for online algorithms with predictions was considered by [6]. They give a novel loss function tailored to their specific online algorithm and prediction, and study the sample complexity of learning a mapping from problem features to a prediction. While they are only concerned with the sample complexity of the learning problem, we also consider the computational complexity, giving polynomial time O(1)-approximate algorithms for the learning problems associated with min-cost matching and online load balancing. The above is only a small sample of the work on algorithms with predictions. 
We refer the interested reader to a recent survey [33], as well as a recently set up website which maintains a list of papers in the area [29].

2 Learnability

When designing new methods in the algorithms-with-predictions setting, the predictions under consideration must satisfy two constraints. First, they should be useful to the algorithm, so that using the predictions allows the algorithm to achieve a better running time, competitive ratio, or some other performance measure. Second, they must be learnable: it must be feasible to find good predictions given a set of problem instances. To rigorously prove learnability, we follow previous work [13, 18, 34] and focus on proving a bound on the sample complexity of finding the best predictions that generalize. Our main result shows that, for a given problem, the pseudo-dimension of finding k predictions is at most a factor Õ(k) larger than that of finding a single best predictor (here and throughout, Õ(·) suppresses logarithmic factors). We state the formal theorem below, but defer the proof to the supplementary material.

Theorem 2.1. Let F be a class of functions f : X → R with pseudo-dimension d, and let F_k := {F(x) = min_{ℓ∈[k]} f^ℓ(x) | f^1, f^2, …, f^k ∈ F}. Then the pseudo-dimension of F_k is at most Õ(dk).

Note that this directly implies that the sample complexity when looking for k predictions is a factor of k larger than that of a single predictor, by the following well-known theorem.

Theorem 2.2 ([8, 34, 36]). Let D be a distribution over a domain X and let F be a class of functions f : X → [0, H] with pseudo-dimension d_F. Consider S independent samples x_1, x_2, …, x_S from D. There is a universal constant c_0 such that, for any ϵ > 0 and δ ∈ (0, 1), if S ≥ c_0 (H/ϵ)^2 (d_F + ln(1/δ)), then

    | (1/S) ∑_{s=1}^{S} f(x_s) − E_{x∼D}[f(x)] | ≤ ϵ

for all f ∈ F with probability at least 1 − δ.

3 Minimum Cost Bipartite Matching with Predicted Duals

In this section we study the minimum cost bipartite matching problem with multiple predictions. The case of a single prediction has been considered recently [17, 18], where dual values were used as the prediction and it was shown that the classical Hungarian algorithm can be sped up by using appropriately learned dual values. Our goal in this section is to extend these results to multiple predictions, i.e., multiple duals. In particular, in Section 3.2 we show that we can use k duals and get a running time comparable to what we would have spent had we used the single best of them in the algorithm of [18], with no asymptotic loss if k is at most O(√n). Then, in Section 3.3, we show that k predictions can be learned with not many more samples (or much more running time) than a single prediction.

3.1 Problem Definition and Predicted Dual Variables

In the minimum cost bipartite matching problem we are given a bipartite graph G = (V, E) with n = |V| vertices and m = |E| edges, with edge costs c ∈ Z^E. The objective is to output a perfect matching M ⊆ E which minimizes the cost c(M) := ∑_{e∈M} c_e. This problem is exactly captured by the following primal and dual linear programming formulations.

    (MWPM-P)   min  ∑_{e∈E} c_e x_e
               s.t. ∑_{e∈N(i)} x_e = 1   for all i ∈ V
                    x_e ≥ 0              for all e ∈ E

    (MWPM-D)   max  ∑_{i∈V} y_i
               s.t. y_i + y_j ≤ c_e      for all e = ij ∈ E

Dinitz et al. [18] studied initializing the Hungarian algorithm with a prediction ŷ of the optimal dual solution y*. They propose an algorithm which operates in two steps. First, the predicted dual solution ŷ may not be feasible, so they give an O(n + m) time algorithm which recovers feasibility (which we refer to as Make-Feasible).
Second, the now-feasible dual solution is used in a primal-dual algorithm such as the Hungarian algorithm (which we refer to as Primal-Dual), and they show that the running time depends on the ℓ1 error of the predicted solution. In addition, they show that learning a good initial dual solution is computationally efficient and has low sample complexity. More formally, they proved the following theorems.

Theorem 3.1 (Dinitz et al. [18]). Let (G, c) be an instance of minimum cost bipartite matching and let ŷ be a prediction of an optimal dual solution y*. There exists an algorithm which returns an optimal solution and runs in time O(m√n · ‖y* − ŷ‖_1).

Theorem 3.2 (Dinitz et al. [18]). Let D be an unknown distribution over instances (G, c) on n vertices and let y*(G, c) be an optimal dual solution for the given instance. Given S independent samples from D, there is a polynomial-time algorithm that outputs a solution ŷ such that

    E_{(G,c)∼D}[ ‖y*(G, c) − ŷ‖_1 ] ≤ min_y E_{(G,c)∼D}[ ‖y*(G, c) − y‖_1 ] + ϵ

with probability 1 − δ, where S = poly(n, 1/ϵ, 1/δ).

3.2 Using k Predicted Dual Solutions Efficiently

Given k predicted dual solutions ŷ^1, ŷ^2, …, ŷ^k, we would like to efficiently determine which solution has the minimum error for the given problem instance. Note that the predicted solutions may still be infeasible and that we do not know the target optimal dual solution y*. We propose the following simple algorithm, which takes as input k predicted solutions and whose running time depends only on the ℓ1 error of the best predicted solution. First, we make each predicted solution feasible, just as before. Next, we select the (now-feasible) dual solution with the highest dual objective value and run the primal-dual algorithm with only that solution. See Algorithm 1 for pseudo-code.

Algorithm 1 Minimum cost matching with k predicted dual solutions
1: procedure k-PredictedPrimalDual(G, c, ŷ^1, ŷ^2, …, ŷ^k)
2:   for ℓ ∈ [k] do
3:     y^ℓ ← MakeFeasible(G, c, ŷ^ℓ)
4:   end for
5:   ℓ′ ← argmax_{ℓ∈[k]} ∑_{i∈V} y^ℓ_i
6:   M ← Primal-Dual(G, c, y^{ℓ′})
7:   return M
8: end procedure

We have the following result concerning Algorithm 1. To interpret this result, note that the cost of increasing the number of predictions is O(k(n + m)), which will be dominated by the m√n term we pay for running the Hungarian algorithm unless k is extremely large (certainly larger than √n) or there is a prediction with 0 error (which is highly unlikely). Hence we can reap the benefit of a large number of predictions "for free".

Theorem 3.3. Let (G, c) be a minimum cost bipartite matching instance and let ŷ^1, ŷ^2, …, ŷ^k be predicted dual solutions. Algorithm 1 returns an optimal solution and runs in time O(k(n + m) + m√n · min_{ℓ∈[k]} ‖y* − ŷ^ℓ‖_1).

We defer the proof to the supplementary material. Correctness is essentially direct from [18], and the running time requires just a simple modification of the analysis of [18]. A small sketch of the selection step is given below.
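To make the selection step of Algorithm 1 concrete, here is a minimal Python sketch. The input encodings (edge costs as a dict over (left, right) vertex pairs, each prediction as a dict over all vertices) and the function name are ours; the one-pass feasibility repair shown is a simple stand-in rather than the exact Make-Feasible routine of [18]. The returned dual would then warm-start a Hungarian-style Primal-Dual solver.

```python
def select_predicted_dual(edge_cost, preds):
    """Selection step of Algorithm 1 (a sketch).

    edge_cost: dict mapping edges (i, j), with left vertex i and right
               vertex j, to integer costs.
    preds:     list of k dicts mapping every vertex to a predicted dual.
    Returns the feasibility-repaired prediction with the largest dual
    objective value.
    """
    def make_feasible(y):
        y = dict(y)
        # One O(m) pass restoring y_i + y_j <= c_ij by lowering right duals
        # only (left duals stay fixed, so earlier edges remain satisfied).
        for (i, j), c in edge_cost.items():
            y[j] = min(y[j], c - y[i])
        return y

    repaired = [make_feasible(y) for y in preds]
    return max(repaired, key=lambda y: sum(y.values()))
```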
3.3 Learning k Predicted Dual Solutions

Next we extend Theorem 3.2 to the setting where we output k predictions. Let D be a distribution over problem instances (G, c) on n vertices. We show that we can find the best set of k predictions. More formally, we prove the following theorem.

Theorem 3.4. Let D be an unknown distribution over instances (G, c) on n vertices and let y*(G, c) be an optimal dual solution for the given instance. Given S independent samples from D, there is a polynomial-time algorithm that outputs k solutions ŷ^1, ŷ^2, …, ŷ^k such that

    E_{(G,c)∼D}[ min_{ℓ∈[k]} ‖y*(G, c) − ŷ^ℓ‖_1 ] ≤ O(1) · min_{y^1,…,y^k} E_{(G,c)∼D}[ min_{ℓ∈[k]} ‖y*(G, c) − y^ℓ‖_1 ] + ϵ

with probability 1 − δ, where S = poly(n, k, 1/ϵ, 1/δ).

The proof of this theorem can be found in the supplementary material, but it is straightforward. The sample complexity follows from combining Theorem 2.1 with Theorem 3.2 (or, more precisely, with the pseudo-dimension bound which implies Theorem 3.2). The O(1)-approximation factor and the polynomial running time follow from the observation that the ERM problem in this setting is just an instance of the k-median clustering problem.

4 Online Load Balancing with Predicted Machine Weights

We now apply our framework to online load balancing with restricted assignments. In particular, we consider proportional weights, which have been considered in prior work [26–28]. Informally, we show in Section 4.2 that if β is the cost of the best of the k predictions, then even without knowing a priori which prediction is best, we achieve cost O(β log k). Then, in Section 4.3, we show that it does not take many samples to learn the best k predictions.

4.1 Problem Definition and Proportional Weights

In online load balancing with restricted assignments there is a sequence of n jobs which must be assigned to m machines in an online fashion. Upon seeing job j, the online algorithm observes its size p_j > 0 and a neighborhood N(j) ⊆ [m] of feasible machines. The algorithm must then choose some feasible machine i ∈ N(j) to irrevocably assign the job to before seeing any more jobs in the sequence; the goal is to minimize the maximum machine load (the makespan). We also consider fractional assignments, i.e., vectors belonging to the set X = {x ∈ R_+^{m×n} | for all j ∈ [n]: ∑_i x_{ij} = 1, and x_{ij} = 0 ⟺ i ∉ N(j)}.

Prior work studied the application of proportional weights [3, 26–28]. Intuitively, a prediction in this setting is a weighting of the machines, which implies an online assignment that is shown to be near-optimal. Slightly more formally, suppose that we are given weights w_i for each machine i. Then each job j is fractionally assigned to machine i in the amount w_i / ∑_{i′∈N(j)} w_{i′}. Notice that, given the weights, this also yields an online assignment. It is known that for any instance there exist weights for which the fractional solution has near-optimal makespan, even though there are only m "degrees of freedom" in the weights, compared to mn in an assignment. That is, for all machines i,

    ∑_{j∈[n]} p_j · w_i / ∑_{i′∈N(j)} w_{i′}

is at most a (1 + ϵ) factor larger than the optimal makespan, for any constant ϵ > 0 [3, 26]. Let w* be a set of near-optimal weights for a given instance. Lattanzi et al. [26] showed the following theorem:

Theorem 4.1. Given predicted weights ŵ, there is an online fractional algorithm with makespan O(log(η(ŵ, w*))) · OPT, where η(ŵ, w*) := max_{i∈[m]} max(ŵ_i / w*_i, w*_i / ŵ_i) is the error in the prediction.

Moreover, this fractional assignment can be converted online to an integral assignment while losing only an O(log log m) factor in the makespan [26, 28]. Thus, we focus on constructing fractional assignments that are competitive with the best prediction in hindsight.

4.2 Combining Fractional Solutions Online

Given k different predicted weight vectors ŵ^1, ŵ^2, …, ŵ^k, we want an algorithm which is competitive against the minimum error among the predicted weights, i.e., we want the competitive ratio to depend on η_min := min_{ℓ∈[k]} η(ŵ^ℓ, w*). The challenge is that we do not know up front which ℓ ∈ [k] will yield the smallest error, but only learn this in hindsight.
For each ℓ ∈ [k], let x^ℓ be the fractional assignment resulting from applying the fractional online algorithm due to [26] with weights ŵ^ℓ. This fractional assignment is revealed one job at a time. We give an algorithm which is O(log k)-competitive against any collection of k fractional assignments that are revealed online. Moreover, our result applies to the unrelated machines setting, in which each job has a collection of machine-dependent sizes {p_{ij}}_{i∈[m]}. The algorithm is based on the doubling trick and is similar to results in [10] that apply to metrical task systems.

Let β := min_{ℓ∈[k]} max_i ∑_j p_{ij} x^ℓ_{ij} be the best fractional makespan in hindsight. As in previous work, our algorithm is assumed to know β, an assumption that can be removed [26]. At a high level, our algorithm maintains a set A ⊆ [k] of solutions which are good with respect to the current value of β, and averages among these. See Algorithm 2 for a detailed description. We have the following theorem.

Theorem 4.2. Let x^1, x^2, …, x^k be fractional assignments which are revealed online. If Algorithm 2 is run with β := min_{ℓ∈[k]} max_i ∑_j p_{ij} x^ℓ_{ij}, then it yields a solution of cost O(log k) · β and never reaches the fail state (line 7 in Algorithm 2).

Let β_ℓ := max_i ∑_j p_{ij} x^ℓ_{ij} and let OPT be the optimal makespan. Theorem 4.1 shows that β_ℓ ≤ O(log η_ℓ) · OPT. The following corollary is then immediate:

Corollary 4.3. Let w^1, w^2, …, w^k be the predicted weights with errors η_1, η_2, …, η_k. Then Algorithm 2 returns a fractional assignment with makespan at most OPT · O(log k) · min_{ℓ∈[k]} log(η_ℓ).

Algorithm 2 Algorithm for combining fractional solutions online for load balancing
1: procedure Combine-LoadBalancing(β)
2:   A ← [k]    ▷ Initially all solutions are good
3:   for each job j do
4:     Receive the fractional assignments x^1, x^2, …, x^k for job j
5:     A(j, β) ← {ℓ ∈ A | for all i ∈ [m]: x^ℓ_{ij} > 0 ⟹ p_{ij} x^ℓ_{ij} ≤ β}
6:     if A = ∅ or A(j, β) = ∅ then
7:       return "Fail"
8:     end if
9:     for all i ∈ [m]: x_{ij} ← (1/|A(j, β)|) ∑_{ℓ∈A(j,β)} x^ℓ_{ij}
10:    B ← {ℓ ∈ A | max_{i∈[m]} ∑_{j′≤j} p_{ij′} x^ℓ_{ij′} > β}    ▷ Bad solutions w.r.t. β
11:    A ← A \ B
12:  end for
13: end procedure

We defer the proof of Theorem 4.2 to the supplementary material. A small Python sketch of Algorithm 2 follows.
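The following sketch mirrors Algorithm 2's control flow under the unrelated-machines model. The input encodings (per-job size dicts over feasible machines and per-job lists of fractional vectors) are hypothetical choices made for the sketch; it tracks loads rather than the assignment vectors themselves.

```python
def combine_load_balancing(jobs, assignments, beta, m, k):
    """Combine k online fractional assignments, given the target makespan beta.

    jobs:        per job j, a dict {machine i: size p_ij} over feasible machines.
    assignments: per job j, a list of k dicts {machine i: fraction x^l_ij}.
    Returns the combined fractional load on each machine.
    """
    A = set(range(k))                       # solutions still good w.r.t. beta
    own_load = [[0.0] * m for _ in range(k)]
    combined = [0.0] * m
    for p, xs in zip(jobs, assignments):
        # Admissible solutions: no machine receives > beta load from this job.
        Aj = {l for l in A
              if all(p[i] * x <= beta for i, x in xs[l].items() if x > 0)}
        if not Aj:
            raise RuntimeError("fail state (never reached for the right beta)")
        for l in Aj:                        # average the admissible solutions
            for i, x in xs[l].items():
                combined[i] += p[i] * x / len(Aj)
        for l in list(A):                   # track each solution's own load
            for i, x in xs[l].items():
                own_load[l][i] += p[i] * x
            if max(own_load[l]) > beta:     # drop solutions exceeding beta
                A.discard(l)
    return combined
```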
4.3 Learning k Predicted Weight Vectors

We now turn to the question of how to learn k different predicted weight vectors ŵ^1, ŵ^2, …, ŵ^k. Recall that there is an unknown distribution D over sets of n jobs from which we receive independent samples J_1, J_2, …, J_S. Our goal is to show that we can efficiently learn (in terms of sample complexity) k predicted weight vectors that minimize the expected minimum error. Let w*(J) be the correct weight vector for instance J and let η(w, w′) := max_{i∈[m]} max(w_i / w′_i, w′_i / w_i) be the error between a pair of weight vectors. We have the following result.

Theorem 4.4. Let D be an unknown distribution over restricted-assignment instances on n jobs and let w*(J) be a set of good weights for instance J. Given S independent samples from D, there is a polynomial-time algorithm that outputs k weight vectors ŵ^1, ŵ^2, …, ŵ^k such that

    E_{J∼D}[ min_{ℓ∈[k]} log(η(ŵ^ℓ, w*(J))) ] ≤ O(1) · min_{w^1,…,w^k} E_{J∼D}[ min_{ℓ∈[k]} log(η(w^ℓ, w*(J))) ] + ϵ

with probability 1 − δ, where S = poly(m, k, 1/ϵ, 1/δ).

The proof of Theorem 4.4 is deferred to the supplementary material, but we note that to obtain a polynomial-time algorithm we carry out an interesting reduction to k-median clustering. Namely, we show that the function d(w, w′) := log(η(w, w′)) satisfies the triangle inequality and thus forms a metric space.

5 Scheduling with Predicted Permutations

In this problem there are n jobs, indexed by 1, 2, …, n, to be scheduled on a single machine. We assume that they are all available at time 0. Job j has size p_j and needs to be processed for p_j time units to complete. If all job sizes are known a priori, Shortest Job First (or, equivalently, Shortest Remaining Time First), which processes jobs in non-decreasing order of size, is known to be optimal for minimizing total completion time. We assume that the true value of p_j is unknown and is revealed only when the job completes, i.e., the non-clairvoyant setting. In this setting it is known that Round-Robin (which processes all alive jobs equally) is 2-competitive, and that this is the best competitive ratio one can hope for [35].

We study this basic scheduling problem assuming certain predictions are available. Following the recent work of Lindermayr and Megow [30], we assume that we are given k orderings/sequences as predictions, {σ_ℓ}_{ℓ∈[k]}. Each σ_ℓ is a permutation of J := [n]; intuitively, it suggests an ordering in which the jobs should be processed. This prediction is inspired by the aforementioned Shortest Job First (SJF), as an optimal schedule can be described as an ordering of jobs: specifically, increasing order of job sizes. For each σ_ℓ, its error is measured as η(J, σ_ℓ) := COST(J, σ_ℓ) − OPT(J), where COST(J, σ_ℓ) denotes the objective of the schedule in which jobs are processed in the order of σ_ℓ and OPT(J) denotes the optimal objective value. We may drop J from the notation when it is clear from context. As observed in [30], the error can be expressed as

    η(J, σ_ℓ) = ∑_{i<j∈J} I^ℓ_{i,j} · |p_i − p_j|,

where I^ℓ_{i,j} is an indicator variable for an 'inversion', which has value 1 if and only if σ_ℓ predicts the pairwise ordering of i and j incorrectly. That is, if p_i < p_j, then the optimal schedule would process i before j; here I^ℓ_{i,j} = 1 iff i ≻_{σ_ℓ} j (i.e., σ_ℓ places i after j). As discussed in [30], this error measure satisfies two desired properties, monotonicity and Lipschitzness, which were formalized in [24]. Our main result is the following.

Theorem 5.1. Consider a constant ϵ > 0. Suppose that for any S ⊆ J with |S| = Θ((1/ϵ^4)(log log n + log k + log(1/ϵ))), we have OPT(S) ≤ cϵ · OPT(J) for some small absolute constant c. Then there exists a randomized algorithm that yields a schedule whose expected total completion time is at most (1 + ϵ)OPT + (1 + ϵ)(1/ϵ^5) · η(J, σ_ℓ) for all ℓ ∈ [k].

As a corollary, by running our algorithm at 1 − ϵ processing speed and simultaneously running Round-Robin at the remaining ϵ speed, the cost increases by a factor of at most 1/(1 − ϵ) while the resulting hybrid algorithm is 2/ϵ-competitive. (This hybrid algorithm is essentially the preferential time sharing of [24, 30, 37]. Formally, we run our algorithm ignoring RR's processing and also run RR ignoring our algorithm, which can be done by a simple simulation; thus we construct two schedules concurrently, and each job completes at the time when it does in either schedule. This type of algorithm was first used in [37].) Computing the error of a fixed ordering in hindsight is straightforward; a short sketch appears below.
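Concretely, since η(J, σ) decomposes into pairwise inversions, it can be evaluated in hindsight with a few lines (a naive O(n^2) pass; the inputs, a size map and a permutation given as a list of job ids, are our assumed encoding):

```python
def prediction_error(p, sigma):
    """eta(J, sigma): the sum of |p_i - p_j| over pairs that sigma orders
    incorrectly.  p maps job ids to true sizes; sigma is the predicted
    ordering as a list of job ids."""
    err = 0.0
    for a in range(len(sigma)):
        for b in range(a + 1, len(sigma)):
            i, j = sigma[a], sigma[b]   # sigma schedules i before j ...
            if p[i] > p[j]:             # ... but j is shorter: an inversion
                err += p[i] - p[j]
    return err
```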
5.1 Algorithm

To make our presentation more transparent, we first round job sizes. Formally, we choose ρ uniformly at random from [0, 1). We then round up each job j's size to the closest number of the form (1 + ϵ)^{ρ+t} for some integer t, and scale all job sizes down by a (1 + ϵ)^ρ factor. We present our algorithm and analysis assuming that every job has a size equal to a power of (1 + ϵ); in the supplementary material we show how to remove this assumption without increasing our algorithm's objective by more than a (1 + ϵ) factor in expectation.

We first present an algorithm that achieves Theorem 5.1 with |S| = Θ((1/ϵ^4)(log n + log k)). The improved bound claimed in the theorem needs minor tweaks of the algorithm and analysis, which are deferred to the supplementary material.

Our algorithm runs in rounds. Let J_r be the jobs that complete in round r ≥ 1. For any subset S of rounds, J_S := ∪_{r∈S} J_r; for example, J_{≤r} := J_1 ∪ … ∪ J_r. Let n_r := |J_{≥r}| = n − |J_{<r}| denote the number of alive jobs at the beginning of round r.

Fix the beginning of round r. If n_r ≤ (1/ϵ^4)(log n + log k), we run Round-Robin to complete all remaining jobs J_{≥r}; this is the last round, denoted round L + 1. Otherwise, we perform the following Steps 1-4.

Step 1. Estimating the ϵ-percentile. Roughly speaking, the goal is to estimate the ϵ-percentile of job sizes among the remaining jobs. For a job j ∈ J_{≥r}, define its rank among J_{≥r} as the number of jobs no smaller than j in J_{≥r}, breaking ties in an arbitrary yet fixed way. Ideally, we would like to find the size of the job of rank ϵn_r, but we do so only approximately: the algorithm finds q̃_r, the size of a job whose rank lies in [ϵ(1 − ϵ)n_r, ϵ(1 + ϵ)n_r]. To handle the case that there are many jobs of the same size q̃_r, we also estimate y_r, the number of jobs no bigger than q̃_r; let ỹ_r denote our estimate of y_r. We show that these estimates can be computed without spending much time, by sampling some jobs and partially processing them in Round-Robin fashion (the proof of the following lemma can be found in the supplementary material).

Lemma 5.2. W.h.p. the algorithm can construct estimates q̃_r and ỹ_r in time at most O(q̃_r (1/ϵ^2) log n) such that there is a job of size q̃_r whose rank lies in [ϵ(1 − ϵ)n_r, ϵ(1 + ϵ)n_r] and |ỹ_r − y_r| ≤ ϵ^2 n_r.

Step 2. Determining Good and Bad Sequences. Let σ^r_ℓ denote σ_ℓ with all jobs completed in previous rounds removed and with the relative ordering of the remaining jobs preserved. Let σ^r_{ℓ,ϵ} denote the first ỹ_r jobs in this ordering. We say a job j is big if p_j > q̃_r, middle if p_j = q̃_r, and small otherwise. Using sampling and partial processing, we approximately distinguish good and bad sequences: informally, σ^r_ℓ is good if σ^r_{ℓ,ϵ} has few big jobs and bad if it has many big jobs. The proof of the following lemma can be found in the supplementary material.

Lemma 5.3. For all ℓ ∈ [k], we can label the sequence σ^r_ℓ either good or bad in time at most O(q̃_r (1/ϵ^2)(log n + log k)) such that the following holds with high probability: if it is labeled good, σ^r_{ℓ,ϵ} has at most 3ϵ^2 n_r big jobs; if bad, σ^r_{ℓ,ϵ} has at least ϵ^2 n_r big jobs.

Step 3. Job Processing. If all sequences are bad, then we process every job, each for up to q̃_r units, in an arbitrary order. Otherwise, we process the first ỹ_r jobs of an arbitrary good sequence, in an arbitrary order, each for up to q̃_r units.

Step 4. Updating Sequences. The jobs completed in this round drop out of the sequences, while the remaining jobs' relative ordering remains fixed in each (sub-)sequence. For simplicity, we assume that partially processed jobs were never processed; this is without loss of generality, as the assumption only increases our schedule's objective.

An idealized sketch of one round appears below.
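To fix ideas, here is an idealized Python sketch of one round (Steps 1-4). It replaces the sampling estimates of Lemmas 5.2 and 5.3 with exact quantities, so, unlike the real algorithm, it peeks directly at the remaining sizes; everything else follows the round structure above.

```python
def one_round(remaining, sequences, eps):
    """One idealized round of the scheduler in Section 5.1.

    remaining: {job id: remaining size}; sequences: the k predicted
    orderings restricted to alive jobs.  Returns the jobs completed
    this round and mutates `remaining` and `sequences` in place.
    """
    n_r = len(remaining)
    sizes = sorted(remaining.values())
    # Step 1 (exact stand-in for Lemma 5.2): the eps-percentile size q_r
    # and y_r = number of jobs no bigger than q_r.
    q_r = sizes[max(0, int(eps * n_r) - 1)]
    y_r = sum(s <= q_r for s in sizes)
    # Step 2 (exact stand-in for Lemma 5.3): a sequence is good if its
    # y_r-prefix contains few jobs bigger than q_r.
    def n_big(seq):
        return sum(remaining[j] > q_r for j in seq[:y_r])
    good = [s for s in sequences if n_big(s) <= 3 * eps**2 * n_r]
    # Step 3: if every sequence is bad, process all jobs up to q_r units
    # (the cost is charged to the prediction error); otherwise process the
    # y_r-prefix of any good sequence, each job for up to q_r units.
    chosen = good[0][:y_r] if good else list(remaining)
    done = [j for j in chosen if remaining[j] <= q_r]
    for j in done:                 # these jobs finish within their budget;
        del remaining[j]           # partial work on the others is discarded
    # Step 4: completed jobs drop out; relative orders stay fixed.
    sequences[:] = [[j for j in s if j in remaining] for s in sequences]
    return done
```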
5.2 Analysis of the Algorithm's Performance

We defer the analysis of the above algorithm (the proof of Theorem 5.1) to the supplementary material, as it is quite technical and complex. At a very high level, though, we use the fact that the error in each prediction can be decomposed into pairwise inversions, and moreover that we can partition the inversions into the rounds of the algorithm in which they appear. Then we look at each round and split into two cases. First, if all sequences are bad, then every prediction has large error, so we can simply use Round-Robin (which is 2-competitive against OPT) and charge the cost to the error of any prediction. Second, if there is a good sequence, then in any good sequence the number of big jobs is small (so we do not spend much time processing them), and we therefore complete almost all of the non-big jobs. Here we crucially use the fact that we can process the first ϵ fraction of jobs in a sequence in an arbitrary order while remaining competitive against the sequence. Finally, we show that all of the additional assumptions and costs (e.g., rounding processing times and the cost due to sampling) change our performance only by a 1 + ϵ factor. Getting all of these details right requires much care.

5.3 Learning k Predicted Permutations

Now we show that learning the best k permutations has polynomial sample complexity.

Theorem 5.4. Let D be an unknown distribution over instances on n jobs. Given S independent samples from D, there is an algorithm that outputs k permutations σ̂_1, σ̂_2, …, σ̂_k such that

    E_{J∼D}[ min_{ℓ∈[k]} η(J, σ̂_ℓ) ] ≤ min_{σ_1,…,σ_k} E_{J∼D}[ min_{ℓ∈[k]} η(J, σ_ℓ) ] + ϵ

with probability 1 − δ, where S = poly(n, k, 1/ϵ, 1/δ).

Proof. The algorithm is basic ERM, and the polynomial sample complexity follows from Theorem 2.1 and Theorem 20 of Lindermayr and Megow [30].

6 Conclusion

Despite the explosive recent work on algorithms with predictions, almost all of it has assumed only a single prediction. In this paper we study algorithms with multiple machine-learned predictions, rather than just one. We study three problems that have been well-studied in the single-prediction setting but not with multiple predictions: faster algorithms for min-cost bipartite matching using learned duals, online load balancing with learned machine weights, and non-clairvoyant scheduling with order predictions. For all of these problems we design algorithms that can utilize multiple predictions, and we show sample complexity bounds for learning the best set of k predictions. Demonstrating the effectiveness of our algorithms (and the broader use of multiple predictions) empirically is an interesting direction for future work.

Surprisingly, we have shown that in some cases using multiple predictions is essentially "free". For instance, in the case of min-cost perfect matching, examining k = O(√n) predictions takes the same amount of time as one round of the Hungarian algorithm, but the number of rounds is determined by the quality of the best prediction. In contrast, for load balancing, using k predictions always incurs an O(log k) cost, so using a constant number of predictions may be best. More generally, studying this trade-off between the cost and the benefit of multiple predictions for other problems remains an interesting and challenging open problem.

Acknowledgments and Disclosure of Funding

Michael Dinitz was supported in part by NSF grant CCF-1909111. Sungjin Im was supported in part by NSF grants CCF-1617653, CCF-1844939 and CCF-2121745. Thomas Lavastida and Benjamin Moseley were supported in part by NSF grants CCF-1824303, CCF-1845146, CCF-2121744 and CMMI-1938909.
Benjamin Moseley was additionally supported in part by a Google Research Award, an Infor Research Award, and a Carnegie Bosch Junior Faculty Chair.
1. What is the focus of the paper regarding optimization algorithms with predictions?
2. What are the strengths and weaknesses of the proposed approach, particularly in its technical contributions and incremental nature?
3. How does the reviewer assess the clarity and organization of the paper's content?
4. What are the questions raised by the reviewer regarding the paper's contribution and its limitations?
5. Does the reviewer have any concerns about the potential negative societal impacts of the theoretical work presented in the paper?
Summary Of The Paper
Strengths And Weaknesses
Questions
Limitations
Summary Of The Paper

Over the last 10 years, a significant line of work has focused on how to augment optimization algorithms with predictions so that (a) improved performance is achieved when the predictions are accurate and (b) worst-case performance is recovered when the predictions are not accurate. In this work, the authors build on the more recent idea of augmenting optimization algorithms with k predictions. For three canonical tasks, the authors quantify the computational cost (which they show is mostly quite small) of obtaining performance equivalent to augmenting the optimization algorithm with the (unknown) best of the k predictors.

Strengths And Weaknesses

The paper is clearly written (although the supplementary material is hard to follow at first, due to interleaving theorem statements, proofs, and intuition like a textbook), and the problem is well-motivated. The results are novel and the proofs all seem correct, although the technical contributions of the learning theory arguments are minimal. (For example, I believe Theorem 2.1 can be extracted from (the proofs of) more general results on compositions of VC classes. See Theorem 6.1 of https://arxiv.org/pdf/1105.4618.pdf, attributed to a 1997 textbook.)

The main limitation of the work is the incremental nature of the contribution and the utility of the results for demonstrating actual improved performance. For all three problems considered, the authors thoroughly cite how past work has considered the single-predictor impact. The main contribution is improving (using the notation of Section 3, but the contribution is the same for all three problems)

    min_ŷ E_{(G,c)∼D} ‖y*(G, c) − ŷ‖_1

to

    min_{ŷ^1,…,ŷ^k} E_{(G,c)∼D} min_{ℓ∈[k]} ‖y*(G, c) − ŷ^ℓ‖_1.

Obviously the second term is no larger than the first term. However, since there is a (sometimes small, but not zero) addition to computational complexity in order to achieve the second term, I think the main quantity of interest is how much smaller the second term gets. This should depend on the problem parameters, k, and some properties of D (e.g., how symmetric it is?). This will provide an answer to the question: is the improvement in performance worth the computational cost of considering k predictors over a single predictor? I do not mean to penalize the authors for their useful characterization of the existing literature, and hence I rate the paper as borderline. If the authors can quantify exactly how much better using k predictors can be in terms of the relevant parameters for the three problems of interest, this would be sufficient to improve my rating to an accept.

Questions

I've detailed this in the body of my review.

Limitations

I see no negative societal impacts of this theoretical work.
NIPS
Title
TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels

Abstract

State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant for all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to +36% on FMNIST and +37% on CIFAR10 when clients have dissimilar data.

1 Introduction

Federated learning is a newly emerging paradigm for machine learning where multiple data holders (clients) collaborate to train a model on their combined dataset. Clients only share partially trained models and other statistics computed from their datasets, keeping their raw data local and private [53, 37]. By obviating the need for a third party to collect and store clients' data, federated learning has several advantages over the classical, centralized paradigm [14, 31, 23]: it ensures clients' consent is tied to the specific task at hand by requiring active participation of the clients in training, confers some basic level of privacy, and has the potential to make machine learning more participatory in general [43, 36]. Further, widespread legislation of data portability and privacy requirements (such as GDPR and CCPA) might even make federated learning a necessity [59].

Collaboration among clients is most attractive when clients have very different subsets of the combined dataset (data heterogeneity). For example, different autonomous driving companies may only be able to collect data in weather conditions specific to their location, whereas their vehicles would need to function under all conditions. In such a scenario, it would be mutually beneficial for companies in geographically diverse locations to collaborate and share data with each other. Further, in such settings, clients are physically separated and connected by ad-hoc networks with large latencies and limited bandwidth. This is especially true when clients are edge devices such as mobile phones, IoT sensors, etc. Thus, communication efficiency is crucial for practical federated learning. However, it is precisely under such circumstances (large data heterogeneity and low communication) that current algorithms fail dramatically [27, 48, 39, 61, 71, 1, 46, 3, 72, etc.]. This motivates our central question: why do current federated methods fail in the face of data heterogeneity, and how can we fix them?

Our solution. We make two main observations: (i) We show that, even with data heterogeneity, linear models can be trained in a federated manner through gradient correction techniques such as SCAFFOLD [39].
While this observation is promising, it alone remains limited, as linear models are not rich enough to solve practical problems of interest (e.g., those that require feature learning). (ii) We shed light on why current federated algorithms struggle to train deep, nonconvex models. We observe that the failure of existing methods for neural networks is not uniform across the layers. The early layers of the network do in fact learn useful features, but the final layers fail to make use of them. Specifically, federated optimization applied to this nonconvex problem results in distorted final layers.

These observations suggest a train-convexify-train federated algorithm, which we call TCT: first, use any off-the-shelf federated algorithm [such as FedAvg, 53] to train a deep model to extract useful features; then, compute a convex approximation of the deep model using its empirical Neural Tangent Kernel (eNTK) [34, 44, 20, 51, 75], and use gradient correction methods such as SCAFFOLD to train the final model. Effectively, the second stage freezes the features learned in the first stage and fits a linear model over them. We show that this simple strategy is highly performant on a variety of tasks and models: we obtain accuracy gains of up to 36 percentage points on FMNIST with a CNN, 37 points on CIFAR10 with ResNet18-GN, and 16 points on CIFAR100 with ResNet18-GN. Further, its convergence remains unaffected even by extreme data heterogeneity. Finally, we show that, given a pre-trained model, our method completely closes the gap between centralized and federated methods.

2 Related Work

Federated learning. There are two main motivating scenarios for federated learning (FL). The first is where internet service companies (e.g., Google, Facebook, Apple, etc.) want to train machine learning models over their users' data, but do not want to transmit raw personalized data away from user devices [60, 8]. This is the setting of cross-device federated learning and is characterized by an extremely large number of unreliable clients, each of whom has very little data, and the collections of data are assumed to be homogeneous [37, 10, 38, 8]. The second motivating scenario is when valuable data is split across different organizations, each of whom is either protected by privacy regulation or is simply unwilling to share their raw data. Such "data islands" are common among hospital networks, financial institutions, autonomous-vehicle companies, etc. This is known as cross-silo federated learning and is characterized by a few highly reliable clients who potentially have extremely diverse data. In this work, we focus on the latter scenario.

Metrics in FL. FL research considers numerous metrics, such as fairness across users [55, 47, 62], formal security and privacy guarantees [9, 60, 21, 56], robustness to corrupted agents and corrupted training data [7, 64, 19, 40, 26], preventing backdoors at test time [6, 66, 69, 52], etc. While these concerns are important, the main goal of FL (and of our work) is to achieve high accuracy with minimal communication [53]. Clients are typically geographically separated yet need to communicate large deep learning models over unoptimized ad-hoc networks [37]. We focus on the setting where all users are interested in training the same model over the combined dataset; this is in contrast to model-agnostic protocols [49, 58, 3] or personalized federated learning [16, 18, 78, 13, 42, 12]. Finally, we focus on minimizing the number of rounds required.
Our approach can be combined with communication compression, which reduces the bits sent per round [67, 4, 24, 65].

Federated optimization. Algorithms for FL proceed in rounds. In each round, the server sends a model to the clients, who partially train this model using their local compute and data. The clients send these partially trained models back to the server, who then aggregates them, finishing the round. FedAvg [53], which is the de facto standard FL algorithm, uses SGD to perform local updates on the clients and aggregates the client models by simply averaging their parameters. Unfortunately, FedAvg has been observed to perform poorly when faced with data heterogeneity across the clients [27, 48, 39, 61, 71, 1, 46, 3, 72, 17, etc.]. Theoretical investigations of this phenomenon [39, 76] showed that this is a result of gradient heterogeneity across the clients. Consider FedAvg initialized with the globally optimal model: if this model is not also optimal for each of the clients, the local updates will push it away from the global optimum, so convergence requires careful tuning of hyper-parameters. To overcome this issue, SCAFFOLD [39] and FedDyn [1] propose to use control variates to correct for the biases of the individual clients, akin to variance reduction [35, 15]. This gradient correction is applied in every local update by the client and provably nullifies the effect of gradient heterogeneity [39, 54, 12]. However, as we show here, such methods are insufficient to overcome high data heterogeneity, especially for deep learning. Other, more heuristic approaches to combat gradient heterogeneity include using a regularizer [48] and sophisticated server aggregation strategies such as momentum [28, 70, 50] or adaptivity [61, 38, 11].

A second line of work pins the blame on performance loss due to averaging nonconvex models. To overcome this, Singh and Jaggi [63] and Yu et al. [81] propose to learn a mapping between the weights of the client models before averaging, Afonin and Karimireddy [3] advocate a functional perspective and replace the averaging step with knowledge distillation, and Wang et al. [74], Li et al. [46], and Tan et al. [68] attempt to align the internal representations of the client models. However, averaging is unlikely to be the only culprit, since FedAvg does succeed under low heterogeneity, and averaging nonconvex models can even lead to improved performance [33, 77].

Neural Tangent Kernels (NTK) and neural network linearization. The NTK was first proposed to analyze the limiting behavior of infinitely wide networks [34, 44]. While the NTK with MSE loss may be a bad approximation of real-world finite networks in general [22], it approximates the fine-tuning of a pre-trained network well [57], especially with some minor modifications [2]. That is, the NTK cannot capture feature learning, but it does capture how a model utilizes learnt features better than last- or mid-layer activations do.

3 The Effect of Nonconvexity

In this section, we investigate the poor performance of FedAvg [53] and SCAFFOLD [39] empirically in the setting of deep neural networks, focusing on image classification with a ResNet-18. To construct our federated learning setup, we split the CIFAR-10 dataset in a highly heterogeneous manner among ten clients: we either assign each client two classes (denoted by #C=2) or distribute samples according to a Dirichlet distribution with α = 0.1 (denoted by α=0.1). For more details, see Section 5.1; a sketch of the Dirichlet partitioning appears below.
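For concreteness, here is a small numpy sketch of the Dirichlet label split (the class-subset split #C=c is analogous). The function name and bookkeeping are ours, not taken from the paper's released code.

```python
import numpy as np

def dirichlet_split(labels, n_clients=10, alpha=0.1, seed=0):
    """For each class c, draw p_c ~ Dir_K(alpha) and give client k a
    p_c[k]-fraction of class c's examples; smaller alpha means more
    heterogeneity.  Returns per-client lists of example indices."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        ix = rng.permutation(np.where(labels == c)[0])
        shares = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(shares)[:-1] * len(ix)).astype(int)
        for k, part in enumerate(np.split(ix, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx
```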
Insufficiency of gradient correction methods. Current theoretical work [e.g., 39, 61, 1, 73] attributes the slowdown from data heterogeneity to the individual clients having varying local optima. If no single model is simultaneously optimal for all clients, then the updates of different clients can compete with and distort each other, leading to slow convergence. This tension is captured by the variance of the updates across the clients [client gradient heterogeneity; see 72]. Gradient correction methods such as SCAFFOLD [39] and FedDyn [1] explicitly correct for this and are provably unaffected by gradient heterogeneity for both convex and nonconvex losses. These theoretical predictions are aligned with the results of Figure 1(a), where the loss landscape is convex: SCAFFOLD is relatively unaffected by the level of heterogeneity and consistently outperforms FedAvg. In particular, performance is largely dictated by the algorithm and not by the data distributions. This shows that client gradient heterogeneity captures the difficulty of the problem well. On the other hand, when training a ResNet-18 model with a nonconvex loss landscape, Figure 1(b) shows that both FedAvg and SCAFFOLD suffer from data heterogeneity, even though the theory of gradient correction applies to both convex and nonconvex losses. Further, the train and test accuracies in Figure 1(b) match quite closely, suggesting that the failure lies in optimization (not fitting the training data) rather than generalization. Thus, while the current theory makes no qualitative distinction between convex and nonconvex convergence, the practical behavior of algorithms in these settings is very different. Such differences between theoretical predictions and practical reality suggest that black-box notions such as gradient heterogeneity are insufficient for capturing the difficulty of training deep models.

Ease of feature learning. We now dive into how a ResNet-18 trained with FedAvg (56.9% accuracy) differs from the centralized baseline (91.9% accuracy). We first apply linear probing to the FedAvg model (i.e., retraining with all but the output layer frozen). Note that this is equivalent to (convex) logistic regression over the last-layer activations. This simple procedure produces a striking jump from 56.9% to 77.9% accuracy. Thus, of the 35% gap in accuracy between the FedAvg and centralized models, 21% may be attributed to a failure to optimize the linear output layer.

We next extend this experiment towards probing the information content of the other layers. Given a FedAvg-trained model, we can use centralized training to retrain only the last ℓ layers while keeping the remaining (7 − ℓ) layers (or ResNet blocks) frozen. We can also perform this procedure starting from a randomly initialized model. The performance difference between these two models can be attributed to the information content of the frozen (7 − ℓ) layers of the FedAvg model. Table 1 summarizes the results of this experiment. The large difference in accuracy (up to 42.6%) indicates that the initial layers of the FedAvg model have learned useful features. There continues to be a gap between the FedAvg features and random features in the earlier layers as well (the significant decrease in the gap as we go down the layers may be because the skip connections in the lower ResNet blocks allow the random frozen layers to be sidestepped, which underestimates the true utility and information content of the earlier FedAvg layers), meaning that all layers of the FedAvg model learn useful features. We conjecture this is because, from the perspective of the earlier layers, which perform simple edge detection, the tasks are independent of the labels and the clients are i.i.d. However, the higher layers are more specialized, and there the effect of the heterogeneity is stronger. A sketch of the linear probing procedure follows.
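Linear probing itself is only a few lines in PyTorch. This sketch assumes a torchvision-style model whose classifier head is exposed as `model.fc` (true for ResNet-18); that attribute name, and the hyper-parameters, are our assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

def linear_probe(model, loader, epochs=10, lr=0.01, num_classes=10):
    """Freeze a FedAvg-trained network and retrain only its output layer
    centrally (Section 3's linear probing)."""
    for w in model.parameters():
        w.requires_grad_(False)
    # Fresh, trainable head; everything below it stays frozen.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    opt = torch.optim.SGD(model.fc.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```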
4 Method

Based on the observations in Section 3, we propose train-convexify-train (TCT) as a method for overcoming data heterogeneity when training deep models in a federated setting. Our high-level intuition is that we want to leverage both the features learned from applying FedAvg to neural networks and the effectiveness of convex federated optimization. More specifically, we perform several rounds of "bootstrap" FedAvg to learn features before solving a convexified version of the original optimization problem.

4.1 Computing the Empirical Neural Tangent Kernel

To sidestep the challenges presented by nonconvexity, we describe how we approximate a neural network by its "linearization." Given a neural network f(·; θ_0) with weights θ_0 ∈ R^P mapping inputs x ∈ R^D to R^C, we replace it by its empirical neural tangent kernel (eNTK) approximation at θ_0, given by

    f(x; θ) ≈ f(x; θ_0) + (θ − θ_0)^T (∂/∂θ) f(x; θ_0)

at each x ∈ R^D. Under this approximation, f(x; θ) is a linear function of the "feature vector" (f(x; θ_0), (∂/∂θ)f(x; θ_0)), and the original nonconvex optimization problem becomes (convex) linear regression with respect to these features. (For classification problems, we one-hot encode the labels and fit a linear model using the squared loss.) Leveraging the NTK for solving federated optimization problems has also been studied in previous work [29, 82].

To reduce the computational burden of working with the eNTK approximation, we make two further approximations. First, we randomly reinitialize the last layer of θ_0 and only consider (∂/∂θ)f(x; θ_0) with respect to a single output logit. Over the randomness of this reinitialization, E[f(x; θ_0)] = 0; moreover, given the random reinitialization, all the output logits of f(x; θ_0) are symmetric. These observations mean each data point x can be represented by a P-dimensional feature vector (∂/∂θ)f_1(x; θ_0), where f_1(·; θ_0) refers to the first output logit. Second, we apply a dimensionality reduction by subsampling p random coordinates from this P-dimensional featurization. (Such representations empirically have low effective dimension due to fast eigenvalue decay [see, e.g., 75], so this random projection approximately preserves the geometry of the data points [5, 83]. For all of our experiments, we set p = 100,000.) In our setting, the subsampling has the added benefit of reducing the number of bits communicated per round.

In summary, we transform our original (nonconvex) optimization problem over a neural network initialized at θ_0 into a convex optimization problem in three steps: (i) reinitialize the last layer of θ_0; (ii) for each data point x, compute the gradient eNTK(x; θ_0) := (∂/∂θ)f_1(x; θ_0); (iii) subsample the coordinates of eNTK(x; θ_0) for each x to obtain a reduced-dimensionality eNTK representation. Let S : R^P → R^p denote this subsampling operation. Finally, we solve the resulting linear regression problem over these eNTK representations. (Given a fitted linear model with weights W ∈ R^{p×C}, the prediction at x is argmax_j [W^T S(eNTK(x))]_j.) A sketch of this featurization appears below.
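A minimal PyTorch sketch of steps (i)-(iii). The per-example loop and the helper name are ours; the shared `seed` stands in for coordinating the subsampler S across clients, and we assume the last layer has already been reinitialized (step (i)) and that the model outputs a (batch, C) tensor.

```python
import torch

def entk_features(model, xs, p=100_000, seed=0):
    """Per-example gradient of the first logit, subsampled to p coordinates.
    The simplest (not the fastest) implementation of the eNTK featurization."""
    params = [w for w in model.parameters() if w.requires_grad]
    P = sum(w.numel() for w in params)
    gen = torch.Generator().manual_seed(seed)
    idx = torch.randperm(P, generator=gen)[:p]      # coordinates kept by S
    feats = []
    for x in xs:
        model.zero_grad()
        model(x.unsqueeze(0))[0, 0].backward()      # first logit f_1(x; theta_0)
        grad = torch.cat([w.grad.reshape(-1) for w in params])
        feats.append(grad[idx].clone())
    return torch.stack(feats)                        # shape (len(xs), p)
```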
4.2 Convexifying Federated Learning via eNTK Representations

The eNTK approximation lets us convexify the neural net optimization problem: following Section 4.1, we may extract (from a model trained with FedAvg) eNTK representations of the inputs of each client. It remains to fit an overparameterized linear model using these eNTK features in a federated manner. For ease of presentation, we denote the subsampled eNTK representation of input x by z ∈ R^p, where p is the eNTK feature dimension after subsampling, and we write z_i^k for the eNTK feature of the i-th sample of the k-th client. Then, for K the number of clients, Y_i^k the one-hot encoded labels, n_k the number of data points of the k-th client, n := ∑_{k∈[K]} n_k the total number of data points across all clients, and p_k := n_k / n, we can approximate the nonconvex neural net optimization problem by the convex linear regression problem

    min_W L(W) := ∑_{k=1}^{K} p_k · L_k(W),   where   L_k(W) := (1/n_k) ∑_{i=1}^{n_k} ‖W^T z_i^k − Y_i^k‖_2^2.    (1)

To obtain the eNTK representation z of an input x, we take θ_0 in Section 4.1 to be the weights of a model trained with FedAvg. As we will show in Section 5, the convex reformulation in Eq. (1) significantly reduces the number of communication rounds needed to find an optimal solution.

4.3 Train-Convexify-Train (TCT)

We now present our algorithm train-convexify-train (TCT), with convexification done via the neural tangent kernel, for federated optimization.

TCT — train-convexify-train with eNTK representations
• Stage 1: Extract eNTK features from a FedAvg-trained model. FedAvg is first used to train the model for T_1 communication rounds. Let θ_{T_1} denote the model weights after these T_1 rounds. Then, each client locally computes subsampled eNTK features, i.e., z_i^k = S(eNTK(x_i^k; θ_{T_1})) for k ∈ [K] and i ∈ [n_k].
• Stage 2: Decentralized linear regression with gradient correction. Given samples {(z_i^k, Y_i^k)}_{i=1}^{n_k} on each client k, first normalize the eNTK inputs of all clients with a single communication round (for every feature in the eNTK representation, subtract the mean and scale to unit variance). Then, solve the linear regression problem defined in Eq. (1) by SCAFFOLD with local learning rate η and M local steps. (A detailed description of SCAFFOLD for solving linear regression problems can be found in Algorithm 1, Appendix A; it has the same communication and computation cost as FedAvg.)

To motivate TCT, recall that in Section 3 we found that FedAvg learns "useful" features despite its poor performance, especially in the earlier layers. By taking an eNTK approximation, TCT optimizes a convex approximation while using information from all layers of the model. Empirically, we find that these extracted eNTK features significantly reduce the number of communication rounds needed to learn a performant model, even with data heterogeneity. A minimal sketch of the Stage 2 solver appears below.
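The following numpy sketch solves Eq. (1) with SCAFFOLD under full client participation, using the "option II" control-variate update; weighting client deltas by data size matches the p_k-weighted objective. The details may differ from Algorithm 1 of the paper's Appendix A.

```python
import numpy as np

def scaffold_linreg(Z, Y, clients, rounds=100, M=500, lr=5e-5):
    """Z: (n, p) normalized eNTK features; Y: (n, C) one-hot labels;
    clients: list of index arrays partitioning range(n)."""
    W = np.zeros((Z.shape[1], Y.shape[1]))
    c_glob = np.zeros_like(W)
    c_loc = [np.zeros_like(W) for _ in clients]
    n = sum(len(ix) for ix in clients)
    for _ in range(rounds):
        dW, dc = np.zeros_like(W), np.zeros_like(W)
        for k, ix in enumerate(clients):
            Zk, Yk = Z[ix], Y[ix]
            Wk = W.copy()
            for _ in range(M):
                g = (2.0 / len(ix)) * Zk.T @ (Zk @ Wk - Yk)
                Wk -= lr * (g - c_loc[k] + c_glob)        # corrected local step
            ck = c_loc[k] - c_glob + (W - Wk) / (M * lr)  # "option II" update
            dW += (len(ix) / n) * (Wk - W)
            dc += (len(ix) / n) * (ck - c_loc[k])
            c_loc[k] = ck
        W, c_glob = W + dW, c_glob + dc                   # server aggregation
    return W
```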
5 Experiments

We now study the performance of TCT for the decentralized training of deep neural networks in the presence of data heterogeneity. We compare TCT to state-of-the-art federated learning algorithms on three benchmark tasks in federated learning. For each task, we apply these algorithms to client data distributions with varying degrees of data heterogeneity. We find that our proposed approach significantly outperforms existing algorithms when clients have highly heterogeneous data, across all tasks. For additional experimental results and implementation details, see Appendix B. Our code is available at https://github.com/yaodongyu/TCT.

5.1 Experimental Setup

Datasets and degrees of data heterogeneity. We assess the performance of federated learning algorithms on the image classification tasks FMNIST [80], CIFAR10, and CIFAR100 [41]. FMNIST and CIFAR10 each consist of 10 classes, while CIFAR100 includes images from 100 classes. There are 60,000 training images in FMNIST and 50,000 training images in CIFAR10/100. To vary the degree of data heterogeneity, we follow the setup of Li et al. [45] and consider two types of non-i.i.d. data distribution: (i) Data heterogeneity sampled from a symmetric Dirichlet distribution with parameter α [49, 71]; that is, we sample p_c ∼ Dir_K(α) from a K-dimensional symmetric Dirichlet distribution and assign a fraction p_{c,k} of the class-c samples to client k. (Smaller α corresponds to more heterogeneity.) (ii) Clients receive samples from a fixed subset of classes [53]; that is, each client is allocated a subset of classes, and the samples of each class are split into non-overlapping subsets and assigned to the clients allocated that class. We use #C to denote the number of classes allocated to each client; for example, #C=2 means each client has samples from 2 classes. To allow for consistent comparisons, all of our experiments are run with 10 clients.

Models. For FMNIST, we use a convolutional neural network with ReLU activations consisting of two convolutional layers with max pooling followed by two fully connected layers (SimpleCNN). For CIFAR10 and CIFAR100, we mainly consider an 18-layer residual network [25] with 4 basic residual blocks (ResNet-18). In Appendix B.2, we present experimental results for other architectures.

Algorithms and training schemes. We compare TCT to state-of-the-art federated learning algorithms, focusing on the widely used FedAvg [53], FedProx [48], and SCAFFOLD [39]. (For comparisons to additional algorithms, see Appendix B.1.) Each client uses SGD with weight decay 10^-5 and batch size 64 by default. For each baseline method, we run 200 total communication rounds using 5 local training epochs, with the local learning rate selected from {0.1, 0.01, 0.001} by grid search. For TCT, we run 100 rounds of FedAvg in Stage 1 following the above, and use 100 communication rounds in Stage 2 with M = 500 local steps and local learning rate η = 5 · 10^-5.

5.2 Main Results

Table 2 displays the top-1 accuracy of all algorithms on the three tasks with varying degrees of data heterogeneity. We evaluate each algorithm on each task under four degrees of data heterogeneity; smaller #C and α in Table 2 correspond to higher heterogeneity. We find that the existing federated algorithms all suffer when data heterogeneity is high, across all three tasks. For example, the top-1 accuracy of FedAvg on CIFAR-10 is 56.86% when #C=2, which is much worse than the 90.43% achieved in a more homogeneous setting (e.g., α = 0.5). In contrast, TCT achieves consistently strong performance even in the face of high data heterogeneity. More specifically, TCT achieves the best top-1 accuracy across all settings except CIFAR-100 with α = 0.5, where TCT does only slightly worse than SCAFFOLD. In absolute terms, we find that TCT is not affected much by data heterogeneity, with performance dropping by less than 1.5% on CIFAR100 as α goes from 0.5 to 0.001. Moreover, our algorithm improves over existing methods by at least 15% in the challenging cases, including FMNIST with #C=1, CIFAR-10 with #C=1 and #C=2, and CIFAR-100 with α = 0.01 and α = 0.001. And, perhaps surprisingly, our algorithm still performs relatively well in the extreme non-i.i.d. setting where each client sees only a single class.
Figure 2 compares the performance of FedAvg, SCAFFOLD, and TCT in more detail on the CIFAR100 dataset with different degrees of data heterogeneity. We consider the Dirichlet distribution with parameter α ∈ {0.1, 0.01, 0.001} and compare the training and test accuracy of these three algorithms. As shown in Figures 2(a) and 2(b), both FedAvg and SCAFFOLD struggle when data heterogeneity is high: for both algorithms, test accuracy drops significantly as α decreases. In contrast, we see from Figure 2(c) that TCT maintains almost the same test accuracy for different α. Furthermore, the same set of default parameters for our algorithm, including the local learning rate and the number of local steps, is relatively robust to different levels of data heterogeneity.

5.3 Communication Efficiency

To understand the effectiveness of the local steps in our algorithm, we compare SCAFFOLD (used in TCT-Stage 2) to full-batch gradient descent (GD) applied to the overparameterized linear regression problem in Stage 2 of TCT on these datasets. For our algorithm, we set the number of local steps M ∈ {10^2, 10^3} and use the default local learning rate. For full-batch GD, we vary the learning rate from 10^-5 to 10^-1 and visualize the runs that do not diverge. The results are summarized in Figure 3; each dotted line with square markers corresponds to full-batch GD with some learning rate. Across all three datasets, our proposed algorithm consistently outperforms full-batch GD. Meanwhile, we find that more local steps lead to faster convergence across all settings. In particular, our algorithm converges within 20 communication rounds on CIFAR100 (as shown in Figure 3(c)). These results suggest that our proposed algorithm can largely leverage local computation to improve communication efficiency.

5.4 Ablations

Gradient correction. We investigate the role of gradient correction when solving overparameterized linear regression with eNTK features in TCT. We compare SCAFFOLD (used in TCT) to FedAvg for solving the regression problems and summarize the results in Figure 4. We use the default local learning rate and consider three different numbers of local steps for both algorithms, i.e., M ∈ {10, 100, 1000}. As shown in Figure 4, our approach largely outperforms FedAvg when the number of local steps is large (M ≥ 100) across the three datasets. We also find that the performance of FedAvg can even degrade as the number of local steps increases; for example, FedAvg with M = 1000 performs the worst across all three datasets. In contrast to FedAvg, SCAFFOLD converges faster as the number of local steps increases. These observations highlight the importance of gradient correction in our algorithm.

Model weights for computing eNTK features. To understand the impact of the model weights trained in Stage 1 of TCT, we evaluate TCT run with different values of T_1. We consider T_1 ∈ {0, 20, 40, 60, 80, 100}, where T_1 = 0 corresponds to randomly initialized weights. From Figure 5(a), we find that weights after FedAvg training are much more effective than weights at random initialization. Specifically, without FedAvg training, the eNTK (at random initialization) performs worse than standard FedAvg. In contrast, TCT significantly outperforms FedAvg by a large margin (roughly 20% in test accuracy) when the eNTK features are extracted from a FedAvg-trained model. Also, we find that TCT is stable with respect to the choice of the number of communication rounds T_1 in Stage 1; for example, models trained by TCT with T_1 ≥ 60 achieve similar performance.
Effect of normalization. In Figure 5(b), we investigate the role of normalization in TCT by comparing TCT run with normalized and with unnormalized eNTK features. The same number of local steps (M = 500) is used in both settings. We tune the learning rate η for each setting and plot the run that performs best (as measured by training accuracy). The results in Figure 5(b) suggest that the normalization step in TCT significantly improves communication efficiency by increasing the convergence speed. In particular, TCT with normalization converges to nearly 100% training accuracy in approximately 40 communication rounds, which is much faster than TCT without normalization.

Pre-training vs. bootstrapping. In Appendix B.4, we explore the effect of starting from a pre-trained model instead of relying on bootstrapping to learn the features. We find that pre-training further improves the performance of TCT and completely erases the gap between centralized and federated learning. Additionally, we conduct experiments investigating the role of the training loss function and of the subsampling approximation in TCT-Stage 2. We find that neither using the cross-entropy loss as the training objective nor applying the full eNTK representations significantly improves the performance of TCT; on the other hand, applying the subsampling approximation in TCT-Stage 2 largely improves communication efficiency compared to the full eNTK representation approach. See Appendix B.7 for detailed experimental results.

6 Conclusion

We have argued that nonconvexity poses a significant challenge for federated learning algorithms. We found that a neural network trained in such a manner does learn useful features, but fails to use them and thus has poor overall accuracy. To sidestep this issue, we proposed a train-convexify-train procedure: first, train the neural network using FedAvg; then, optimize (using SCAFFOLD) a convex approximation of the model obtained from its empirical neural tangent kernel. We showed that the first stage extracts meaningful features, whereas the second stage learns to utilize these features to obtain a highly performant model. The resulting algorithm is significantly faster and more stable with respect to hyper-parameters than previous federated learning methods. Finally, we also showed that, given a good pre-trained feature extractor, our convexify-train procedure fully closes the gap between centralized and federated learning.

Our algorithm adds to the growing body of work using the eNTK to linearize neural networks and obtain tractable convex approximations. However, unlike most of these past works, which only work with pre-trained models, our bootstrapping allows training models from scratch. Finally, we stress that the success of our approach underscores the need to revisit the theoretical understanding of heterogeneous federated learning. Nonconvexity seems to play an outsized role, but its effect in FL has hitherto been unexplored. In particular, black-box notions of difficulty such as gradient dissimilarity or distances between client optima seem insufficient to capture practical performance. It is likely that further progress in the field (e.g., federated pre-training of foundation models) will require tackling the issue of nonconvexity head on.

Acknowledgments and Disclosure of Funding

We would like to thank the anonymous reviewers for their constructive suggestions and comments.
6 Conclusion We have argued that nonconvexity poses a significant challenge for federated learning algorithms. We found that a neural network trained in a federated manner does learn useful features, but fails to use them and thus has poor overall accuracy. To sidestep this issue, we proposed a train-convexify-train procedure: first, train the neural network using FedAvg; then, optimize (using SCAFFOLD) a convex approximation of the model obtained using its empirical neural tangent kernel. We showed that the first stage extracts meaningful features, whereas the second stage learns to utilize these features to obtain a highly performant model. The resulting algorithm is significantly faster and more stable with respect to hyper-parameters than previous federated learning methods. Finally, we also showed that given a good pre-trained feature extractor, our convexify-train procedure fully closes the gap between centralized and federated learning. Our algorithm adds to the growing body of work using the eNTK to linearize neural networks and obtain tractable convex approximations. However, unlike most of these past works, which only work with pre-trained models, our bootstrapping allows training models from scratch. Finally, we stress that the success of our approach underscores the need to revisit the theoretical understanding of heterogeneous federated learning. Nonconvexity seems to play an outsized role, but its effect in FL has hitherto been unexplored. In particular, black-box notions of difficulty such as gradient dissimilarity or distances between client optima seem insufficient to capture practical performance. It is likely that further progress in the field (e.g., federated pre-training of foundation models) will require tackling the issue of nonconvexity head-on. Acknowledgments and Disclosure of Funding We would like to thank the anonymous reviewers for their constructive suggestions and comments. Yaodong Yu acknowledges support from the joint Simons Foundation-NSF DMS grant #2031899. Alexander Wei acknowledges support from an NSF Graduate Research Fellowship under grant DGE2146752. Sai Praneeth Karimireddy acknowledges support of an SNSF postdoc mobility fellowship. Yi Ma acknowledges support from ONR grants N00014-20-1-2002 and N00014-22-12102 and the joint Simons Foundation-NSF DMS grant #2031899. Michael Jordan acknowledges support of the ONR Mathematical Data Science program.
1. What is the focus of the paper regarding federated learning challenges? 2. What are the strengths and weaknesses of the proposed two-stage training scheme? 3. Do you have any questions or concerns about the training process, model compilation, and optimization considerations? 4. How does the reviewer assess the significance of the work compared to prior works and its limitations?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper focuses on the difficulty introduced by non-convexity and data heterogeneity in federated learning. The authors first show that, with data heterogeneity, linear models and convex optimization problems can be trained efficiently with gradient correction techniques such as SCAFFOLD, while nonconvex problems cannot. Then, in order to sidestep the non-convexity of neural networks, the authors substitute the original model with a linear approximation and the original loss with a quadratic loss, so that the non-convex optimization problem turns into a convex regression problem based on the NTK. Since feature learning is necessary for the NTK to align with the data and provide good results, the authors introduce two-stage training, where feature learning only happens in the first stage, and convex optimization with gradient correction happens in the second stage. The authors conduct experiments to show that such two-stage optimization can fit the data faster and produce better test accuracy. Strengths And Weaknesses Strength: This paper introduces a novel two-stage scheme that combines the feature-learning capacity of neural networks with the efficient optimization of linear models. Weakness: The convexified problem is introduced mainly due to an optimization consideration: SCAFFOLD performs well on convex problems. However, the convex formulation clearly sacrifices model capacity. Feature learning can only happen in the first stage (the non-convex part), so BookNTK learns fewer features than centralized training, which does not involve two-stage training. Questions If I understand correctly, the final training result for BookNTK is a neural network plus a linear model. Any inference requires a backward pass to derive the empirical-NTK features before applying the linear model, which differs from the common workflow. Is there a way to compile the final network and the linear model into a single model that supports inference with a single forward pass? Section 3 mentioned that "the train and test accuracies in Figure 1(b) match quite closely, suggesting that the failure lies in optimization". However, Figure 2 shows a different picture, where training and test accuracy have a large gap. What is the reason behind this discrepancy? Why is a quadratic loss used in (1) instead of cross-entropy? How much worse is BookNTK than centralized training? The authors only mention one centralized baseline in Section 3, where they show that "21% (out of the 35% gap in accuracy) may be attributed to a failure to optimize the linear output layer". However, for BookNTK, none of the results in Section 5 or the appendix includes centralized training. I think this question is as important as "How much better is BookNTK than FedAvg?". I strongly suggest the authors add centralized training results for comparison. Limitations Yes.
NIPS
Title TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels Abstract State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant for all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network’s empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to +36% on FMNIST and +37% on CIFAR10 when clients have dissimilar data. 1 Introduction Federated learning is a newly emerging paradigm for machine learning where multiple data holders (clients) collaborate to train a model on their combined dataset. Clients only share partially trained models and other statistics computed from their dataset, keeping their raw data local and private [53, 37]. By obviating the need for a third party to collect and store clients’ data, federated learning has several advantages over the classical, centralized paradigm [14, 31, 23]: it ensures clients’ consent is tied to the specific task at hand by requiring active participation of the clients in training, confers some basic level of privacy, and has the potential to make machine learning more participatory in general [43, 36]. Further, widespread legislation of data portability and privacy requirements (such as GDPR and CCPA) might even make federated learning a necessity [59]. Collaboration among clients is most attractive when clients have very different subsets of the combined dataset (data heterogeneity). For example, different autonomous driving companies may only be able to collect data in weather conditions specific to their location, whereas their vehicles would need to function under all conditions. In such a scenario, it would be mutually beneficial for companies in geographically diverse locations to collaborate and share data with each other. Further, in such settings, clients are physically separated and connected by ad-hoc networks with large latencies and limited bandwidth. This is especially true when clients are edge devices such as mobile phones, IoT sensors, etc. Thus, communication efficiency is crucial for practical federated learning. However, it is precisely under such circumstances (large data heterogeneity and low communication) that current algorithms fail dramatically [27, 48, 39, 61, 71, 1, 46, 3, 72, etc.]. This motivates our central question: Why do current federated methods fail in the face of data heterogeneity—and how can we fix them? Our solution. We make two main observations: (i) We show that, even with data heterogeneity, linear models can be trained in a federated manner through gradient correction techniques such as SCAFFOLD [39].
While this observation is promising, it alone remains limited, as linear models are not rich enough to solve practical problems of interest (e.g., those that require feature learning). (ii) We shed light on why current federated algorithms struggle to train deep, nonconvex models. We observe that the failure of existing methods for neural networks is not uniform across the layers. The early layers of the network do in fact learn useful features, but the final layers fail to make use of them. Specifically, federated optimization applied to this nonconvex problem results in distorted final layers. These observations suggest a train-convexify-train federated algorithm, which we call TCT: first, use any off-the-shelf federated algorithm [such as FedAvg, 53] to train a deep model to extract useful features; then, compute a convex approximation of the deep model using its empirical Neural Tangent Kernel (eNTK) [34, 44, 20, 51, 75], and use gradient correction methods such as SCAFFOLD to train the final model. Effectively, the second stage freezes the features learned in the first stage and fits a linear model over them. We show that this simple strategy is highly performant on a variety of tasks and models—we obtain accuracy gains of up to 36 percentage points on FMNIST with a CNN, 37 percentage points on CIFAR10 with ResNet18-GN, and 16 percentage points on CIFAR100 with ResNet18-GN. Further, its convergence remains unaffected even by extreme data heterogeneity. Finally, we show that given a pre-trained model, our method completely closes the gap between centralized and federated methods. 2 Related Work Federated learning. There are two main motivating scenarios for federated learning (FL). The first is where internet service companies (e.g., Google, Facebook, Apple, etc.) want to train machine learning models over their users’ data, but do not want to transmit raw personalized data away from user devices [60, 8]. This is the setting of cross-device federated learning and is characterized by an extremely large number of unreliable clients, each of whom has very little data and the collections of data are assumed to be homogeneous [37, 10, 38, 8]. The second motivating scenario is when valuable data is split across different organizations, each of whom is either protected by privacy regulation or is simply unwilling to share their raw data. Such “data islands” are common among hospital networks, financial institutions, autonomous-vehicle companies, etc. This is known as cross-silo federated learning and is characterized by a few highly reliable clients, who potentially have extremely diverse data. In this work, we focus on the latter scenario. Metrics in FL. FL research considers numerous metrics, such as fairness across users [55, 47, 62], formal security and privacy guarantees [9, 60, 21, 56], robustness to corrupted agents and corrupted training data [7, 64, 19, 40, 26], preventing backdoors at test time [6, 66, 69, 52], etc. While these concerns are important, the main goal of FL (and our work) is to achieve high accuracy with minimal communication [53]. Clients are typically geographically separated yet need to communicate large deep learning models over unoptimized ad-hoc networks [37]. Finally, we focus on the setting where all users are interested in training the same model over the combined dataset. This is in contrast to model-agnostic protocols [49, 58, 3] or personalized federated learning [16, 18, 78, 13, 42, 12]. In addition, we focus on minimizing the number of rounds required.
Our approach can be combined with communication compression, which reduces bits sent per round [67, 4, 24, 65]. Federated optimization. Algorithms for FL proceed in rounds. In each round, the server sends a model to the clients, who partially train this model using their local compute and data. The clients send these partially trained models back to the server, who then aggregates them, finishing a round. FedAvg [53], which is the de facto standard FL algorithm, uses SGD to perform local updates on the clients and aggregates the client models by simply averaging their parameters. Unfortunately, however, FedAvg has been observed to perform poorly when faced with data heterogeneity across the clients [27, 48, 39, 61, 71, 1, 46, 3, 72, 17, etc.]. Theoretical investigations of this phenomenon [39, 76] showed that this was a result of gradient heterogeneity across the clients. Consider FedAvg initialized with the globally optimal model. If this model is not also optimal for each individual client, the local updates will push it away from the global optimum. Thus, convergence would require a careful tuning of hyper-parameters. To overcome this issue, SCAFFOLD [39] and FedDyn [1] propose to use control variates to correct for the biases of the individual clients, akin to variance reduction [35, 15]. This gradient correction is applied in every local update by the client and provably nullifies the effect of gradient heterogeneity [39, 54, 12]. However, as we show here, such methods are insufficient to overcome high data heterogeneity, especially for deep learning. Other, more heuristic approaches to combat gradient heterogeneity include using a regularizer [48] and sophisticated server aggregation strategies such as momentum [28, 70, 50] or adaptivity [61, 38, 11]. A second line of work pins the blame on performance loss due to averaging nonconvex models. To overcome this, Singh and Jaggi [63] and Yu et al. [81] propose to learn a mapping between the weights of the client models before averaging, Afonin and Karimireddy [3] advocate a functional perspective and replace the averaging step with knowledge distillation, and Wang et al. [74], Li et al. [46], and Tan et al. [68] attempt to align the internal representations of the client models. However, averaging is unlikely to be the only culprit, since FedAvg does succeed under low heterogeneity, and averaging nonconvex models can lead to improved performance [33, 77]. Neural Tangent Kernels (NTK) and neural network linearization. The NTK was first proposed to analyze the limiting behavior of infinitely wide networks [34, 44]. While the NTK with MSE may be a bad approximation of real-world finite networks in general [22], it approximates the fine-tuning of a pre-trained network well [57], especially with some minor modifications [2]. That is, the NTK cannot capture feature learning, but it does capture how a model utilizes learnt features better than last/mid-layer activations do. 3 The Effect of Nonconvexity In this section, we investigate the poor performance of FedAvg [53] and SCAFFOLD [39] empirically in the setting of deep neural networks, focusing on image classification with a ResNet-18. To construct our federated learning setup, we split the CIFAR-10 dataset in a highly heterogeneous manner among ten clients. We either assign each client two classes (denoted by #C=2) or distribute samples according to a Dirichlet distribution with α = 0.1 (denoted by α=0.1). For more details, see Section 5.1.
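For reference, the class-based split used here can be sketched as below; the cyclic class allocation is an illustrative choice, not necessarily the authors' exact assignment.

```python
import numpy as np

def class_subset_split(labels, num_clients=10, classes_per_client=2, seed=0):
    """Heterogeneous split where each client holds samples from a fixed
    subset of classes, e.g. classes_per_client=2 gives the #C=2 setting."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    # Cyclic allocation: client k gets classes k, k+1, ..., k+#C-1 (mod C).
    alloc = [{classes[(k + t) % len(classes)] for t in range(classes_per_client)}
             for k in range(num_clients)]
    client_idx = [[] for _ in range(num_clients)]
    for c in classes:
        holders = [k for k in range(num_clients) if c in alloc[k]]
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Split the class-c samples into non-overlapping chunks for its holders.
        for k, part in zip(holders, np.array_split(idx, len(holders))):
            client_idx[k].extend(part.tolist())
    return client_idx
```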
Insufficiency of gradient correction methods. Current theoretical work [e.g., 39, 61, 1, 73] attributes the slowdown from data heterogeneity to the individual clients having varying local optima. If no single model is simultaneously optimal for all clients, then the updates of different clients can compete with and distort each other, leading to slow convergence. This tension is captured by the variance of the updates across the clients [client gradient heterogeneity, see 72]. Gradient correction methods such as SCAFFOLD [39] and FedDyn [1] explicitly correct for this and are provably unaffected by gradient heterogeneity for both convex and nonconvex losses. These theoretical predictions are aligned with the results of Figure 1(a), where the loss landscape is convex: SCAFFOLD is relatively unaffected by the level of heterogeneity and consistently outperforms FedAvg. In particular, performance is largely dictated by the algorithm and not the data distributions. This shows that client gradient heterogeneity captures the difficulty of the problem well. On the other hand, when training a ResNet-18 model with a nonconvex loss landscape, Figure 1(b) shows that both FedAvg and SCAFFOLD suffer from data heterogeneity. This is despite the theory of gradient correction applying to both convex and nonconvex losses. Further, the train and test accuracies in Figure 1(b) match quite closely, suggesting that the failure lies in optimization (not fitting the training data) rather than generalization. Thus, while the current theory makes no qualitative distinctions between convex and nonconvex convergence, the practical behavior of algorithms in these settings is very different. Such differences between theoretical predictions and practical reality suggest that black-box notions such as gradient heterogeneity are insufficient for capturing the difficulty of training deep models. Ease of feature learning. We now dive into how a ResNet-18 trained with FedAvg (56.9% accuracy) differs from the centralized baseline (91.9% accuracy). We first apply linear probing to the FedAvg model (i.e., retraining with all but the output layer frozen). Note that this is equivalent to (convex) logistic regression over the last-layer activations. This simple procedure produces a striking jump from 56.9% to 77.9% accuracy. Thus, of the 35% gap in accuracy between the FedAvg and centralized models, 21% may be attributed to a failure to optimize the linear output layer. We next extend this experiment towards probing the information content of other layers. Given a FedAvg-trained model, we can use centralized training to retrain only the last ℓ layers while keeping the rest of the (7 − ℓ) layers (or ResNet blocks) frozen. We can also perform this procedure starting from a randomly initialized model. The performance difference between these two models can be attributed to the information content of the frozen (7 − ℓ) layers of the FedAvg model. Table 1 summarizes the results of this experiment. The large difference in accuracy (up to 42.6%) indicates the initial layers of the FedAvg model have learned useful features. There continues to be a gap between the FedAvg features and random features in the earlier layers as well (the significant decrease in the gap as we go down the layers may be because of the skip connections in the lower ResNet blocks, which allow the random frozen layers to be sidestepped; this underestimates the true utility and information content of the earlier FedAvg layers), meaning that all layers of the FedAvg model learn useful features. We conjecture this is because, from the perspective of the earlier layers, which perform simple edge detection, the tasks are independent of labels and the clients are i.i.d. However, the higher layers are more specialized, and the effect of the heterogeneity is stronger.
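To make the linear-probing step concrete, the following is a minimal PyTorch sketch, assuming a torchvision-style model whose classifier head is the `fc` attribute; the loader, epoch count, and learning rate are illustrative placeholders rather than the exact recipe used in the paper.

```python
import torch
import torch.nn as nn

def linear_probe(model, loader, num_classes=10, epochs=10, lr=0.1, device="cpu"):
    """Retrain only the output layer of a (FedAvg-trained) model.

    Equivalent to (convex) logistic regression over last-layer activations.
    """
    for p in model.parameters():          # freeze the entire backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # fresh trainable head
    model.to(device)
    model.eval()  # frozen normalization layers stay in inference mode;
                  # gradients still flow to the new head below
    opt = torch.optim.SGD(model.fc.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

Retraining the last ℓ layers instead of only `fc` works the same way: unfreeze the corresponding blocks and pass their parameters to the optimizer.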
4 Method Based on the observations in Section 3, we propose train-convexify-train (TCT) as a method for overcoming data heterogeneity when training deep models in a federated setting. Our high-level intuition is that we want to leverage both the features learned from applying FedAvg to neural networks and the effectiveness of convex federated optimization. More specifically, we perform several rounds of “bootstrap” FedAvg to learn features before solving a convexified version of the original optimization problem. 4.1 Computing the Empirical Neural Tangent Kernel To sidestep the challenges presented by nonconvexity, we describe how we approximate a neural network by its “linearization.” Given a neural network $f(\cdot\,;\theta_0)$ with weights $\theta_0 \in \mathbb{R}^P$ mapping inputs $x \in \mathbb{R}^D$ to $\mathbb{R}^C$, we replace it by its empirical neural tangent kernel (eNTK) approximation at $\theta_0$, given by $f(x;\theta) \approx f(x;\theta_0) + (\theta - \theta_0)^\top \nabla_\theta f(x;\theta_0)$ at each $x \in \mathbb{R}^D$. Under this approximation, $f(x;\theta)$ is a linear function of the “feature vector” $(f(x;\theta_0), \nabla_\theta f(x;\theta_0))$, and the original nonconvex optimization problem becomes (convex) linear regression with respect to these features (for classification problems, we one-hot encode labels and fit a linear model using squared loss). Leveraging the NTK for solving federated optimization problems has also been studied in previous work [29, 82]. To reduce the computational burden of working with the eNTK approximation, we make two further approximations. First, we randomly reinitialize the last layer of $\theta_0$ and only consider $\nabla_\theta f(x;\theta_0)$ with respect to a single output logit. Over the randomness of this reinitialization, $\mathbb{E}[f(x;\theta_0)] = 0$. Moreover, given the random reinitialization, all the output logits of $f(x;\theta_0)$ are symmetric. These observations mean each data point $x$ can be represented by a $P$-dimensional feature vector $\nabla_\theta f_1(x;\theta_0)$, where $f_1(\cdot\,;\theta_0)$ refers to the first output logit. Then, we apply a dimensionality reduction by subsampling $p$ random coordinates from this $P$-dimensional featurization (such representations empirically have low effective dimension due to fast eigenvalue decay [see, e.g., 75], so a random projection approximately preserves the geometry of the data points [5, 83]; for all of our experiments, we set $p = 100{,}000$). In our setting, this subsampling has the added benefit of reducing the number of bits communicated per round. In summary, we transform our original (nonconvex) optimization problem over a neural network initialized at $\theta_0$ into a convex optimization problem in three steps: (i) reinitialize the last layer of $\theta_0$; (ii) for each data point $x$, compute the gradient $\mathrm{eNTK}(x;\theta_0) := \nabla_\theta f_1(x;\theta_0)$; (iii) subsample the coordinates of $\mathrm{eNTK}(x;\theta_0)$ for each $x$ to obtain a reduced-dimensionality eNTK representation. Let $S : \mathbb{R}^P \to \mathbb{R}^p$ denote this subsampling operation. Finally, we solve the resulting linear regression problem over these eNTK representations (given a fitted linear model with weights $W \in \mathbb{R}^{p \times C}$, the prediction at $x$ is $\arg\max_j\, [W^\top S(\mathrm{eNTK}(x))]_j$).
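As a rough sketch of this featurization using ordinary autograd (the function names, the fixed seed, and the per-example loop are illustrative; a practical implementation would batch the gradient computation and reinitialize the last layer beforehand, as described above):

```python
import torch

def make_subsample_indices(model, p=100_000, seed=0):
    """Fixed random coordinate subset S, shared by all clients."""
    P = sum(param.numel() for param in model.parameters())
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(P, generator=g)[:p]

def entk_feature(model, x, idx):
    """Subsampled eNTK representation z = S(grad_theta f_1(x; theta_0))."""
    model.zero_grad()
    logits = model(x.unsqueeze(0))   # shape (1, C)
    logits[0, 0].backward()          # differentiate the first output logit
    flat = torch.cat([q.grad.flatten() if q.grad is not None
                      else torch.zeros_like(q).flatten()
                      for q in model.parameters()])
    return flat[idx].detach()        # keep p random coordinates
```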
4.2 Convexifying Federated Learning via eNTK Representations The eNTK approximation lets us convexify the neural net optimization problem: following Section 4.1, we may extract (from a model trained with FedAvg) eNTK representations of inputs from each client. It remains to fit an overparameterized linear model using these eNTK features in a federated manner. For ease of presentation, we denote the subsampled eNTK representation of input $x$ by $z \in \mathbb{R}^p$, where $p$ is the eNTK feature dimension after subsampling. We use $z_i^k$ to represent the eNTK feature of the $i$-th sample from the $k$-th client. Then, for $K$ the number of clients, $Y_i^k$ the one-hot encoded labels, $n_k$ the number of data points of the $k$-th client, $n := \sum_{k \in [K]} n_k$ the number of data points across all clients, and $p_k := n_k/n$, we can approximate the nonconvex neural net optimization problem by the convex linear regression problem $$\min_W\; L(W) := \sum_{k=1}^{K} p_k \cdot L_k(W), \quad \text{where } L_k(W) := \frac{1}{n_k} \sum_{i=1}^{n_k} \big\| W^\top z_i^k - Y_i^k \big\|_2^2. \tag{1}$$ To obtain the eNTK representation $z$ of an input $x$, we take $\theta_0$ in Section 4.1 to be the weights of a model trained with FedAvg. As we will show in Section 5, the convex reformulation in Eq. (1) significantly reduces the number of communication rounds needed to find an optimal solution. 4.3 Train-Convexify-Train (TCT) We now present our algorithm train-convexify-train (TCT), with convexification done via the neural tangent kernel, for federated optimization. TCT — train-convexify-train with eNTK representations: • Stage 1: Extract eNTK features from a FedAvg-trained model. FedAvg is first used to train the model for $T_1$ communication rounds. Let $\theta_{T_1}$ denote the model weights after these $T_1$ rounds. Then, each client locally computes subsampled eNTK features, i.e., $z_i^k = S(\mathrm{eNTK}(x_i^k; \theta_{T_1}))$ for $k \in [K]$ and $i \in [n_k]$. • Stage 2: Decentralized linear regression with gradient correction. Given samples $\{(z_i^k, Y_i^k)\}_{i=1}^{n_k}$ on each client $k$, first normalize the eNTK inputs of all clients with a single communication round (for every feature in the eNTK representation, subtract the mean and scale to unit variance). Then, solve the linear regression problem defined in Eq. (1) by SCAFFOLD with local learning rate $\eta$ and local steps $M$ (the detailed description of SCAFFOLD for solving linear regression problems can be found in Algorithm 1, Appendix A; it has the same communication and computation cost as FedAvg). To motivate TCT, recall that in Section 3 we found that FedAvg learns “useful” features despite its poor performance, especially in the earlier layers. By taking an eNTK approximation, TCT optimizes a convex approximation while using information from all layers of the model. Empirically, we find that these extracted eNTK features significantly reduce the number of communication rounds needed to learn a performant model, even with data heterogeneity.
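A NumPy sketch may help make Stage 2 concrete. It assumes full client participation and full-batch local gradients, and uses the SCAFFOLD control-variate update commonly called "option II"; the paper's actual implementation (Algorithm 1, Appendix A) may differ in details such as minibatching.

```python
import numpy as np

def scaffold_round(W, c_global, clients, lr=5e-5, M=500):
    """One SCAFFOLD round for the regression problem in Eq. (1).

    clients: list of dicts with eNTK features Z (n_k x p), one-hot labels
             Y (n_k x C), and a local control variate c (same shape as W).
    """
    deltas, c_deltas = [], []
    for cl in clients:
        Wk = W.copy()
        for _ in range(M):
            # Local gradient of (1/n_k) * ||Z Wk - Y||_F^2 ...
            grad = (2.0 / len(cl["Z"])) * cl["Z"].T @ (cl["Z"] @ Wk - cl["Y"])
            # ... corrected by the control variates (the SCAFFOLD step).
            Wk -= lr * (grad - cl["c"] + c_global)
        # Control-variate update ("option II" of SCAFFOLD).
        c_new = cl["c"] - c_global + (W - Wk) / (M * lr)
        deltas.append(Wk - W)
        c_deltas.append(c_new - cl["c"])
        cl["c"] = c_new
    # Server step: aggregate model and control-variate updates.
    return W + np.mean(deltas, axis=0), c_global + np.mean(c_deltas, axis=0)
```

The defaults lr=5e-5 and M=500 mirror the Stage 2 hyper-parameters reported in Section 5.1.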
5 Experiments We now study the performance of TCT for the decentralized training of deep neural networks in the presence of data heterogeneity. We compare TCT to state-of-the-art federated learning algorithms on three benchmark tasks in federated learning. For each task, we apply these algorithms on client data distributions with varying degrees of data heterogeneity. We find that our proposed approach significantly outperforms existing algorithms when clients have highly heterogeneous data across all tasks. For additional experimental results and implementation details, see Appendix B. Our code is available at https://github.com/yaodongyu/TCT. 5.1 Experimental Setup Datasets and degrees of data heterogeneity. We assess the performance of federated learning algorithms on the image classification tasks FMNIST [80], CIFAR10, and CIFAR100 [41]. FMNIST and CIFAR10 each consist of 10 classes, while CIFAR100 includes images from 100 classes. There are 60,000 training images in FMNIST, and 50,000 training images in CIFAR10/100. To vary the degree of data heterogeneity, we follow the setup of Li et al. [45]. We consider two types of non-i.i.d. data distribution: (i) Data heterogeneity sampled from a symmetric Dirichlet distribution with parameter α [49, 71]. That is, we sample $p_c \sim \mathrm{Dir}_K(\alpha)$ from a $K$-dimensional symmetric Dirichlet distribution and assign a $p_{c,k}$-fraction of the class-$c$ samples to client $k$. (Smaller α corresponds to more heterogeneity.) (ii) Clients get samples from a fixed subset of classes [53]. That is, each client is allocated a subset of classes; then, the samples of each class are split into non-overlapping subsets and assigned to clients that were allocated this class. We use #C to denote the number of classes allocated to each client. For example, #C=2 means each client has samples from 2 classes. To allow for consistent comparisons, all of our experiments are run with 10 clients. Models. For FMNIST, we use a convolutional neural network with ReLU activations consisting of two convolutional layers with max pooling followed by two fully connected layers (SimpleCNN). For CIFAR10 and CIFAR100, we mainly consider an 18-layer residual network [25] with 4 basic residual blocks (ResNet-18). In Appendix B.2, we present experimental results for other architectures. Algorithms and training schemes. We compare TCT to state-of-the-art federated learning algorithms, focusing on the widely-used algorithms FedAvg [53], FedProx [48], and SCAFFOLD [39]. (For comparisons to additional algorithms, see Appendix B.1.) Each client uses SGD with weight decay $10^{-5}$ and batch size 64 by default. For each baseline method, we run it for 200 total communication rounds using 5 local training epochs with local learning rate selected from {0.1, 0.01, 0.001} by grid search. For TCT, we run 100 rounds of FedAvg in Stage 1 following the above and use 100 communication rounds in Stage 2 with $M = 500$ local steps and local learning rate $\eta = 5 \cdot 10^{-5}$.
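The Dirichlet partitioning described above can be sketched in a few lines of NumPy; this is an illustrative reconstruction of the setup, not the authors' exact splitting code.

```python
import numpy as np

def dirichlet_split(labels, num_clients=10, alpha=0.5, seed=0):
    """For each class c, draw p_c ~ Dir_K(alpha) and give client k a
    p_{c,k}-fraction of the class-c samples. Smaller alpha -> more skew."""
    rng = np.random.default_rng(seed)
    client_idx = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        p = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(p)[:-1] * len(idx)).astype(int)  # proportional cut points
        for k, part in enumerate(np.split(idx, cuts)):
            client_idx[k].extend(part.tolist())
    return client_idx
```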
5.2 Main Results Table 2 displays the top-1 accuracy of all algorithms on the three tasks with varying degrees of data heterogeneity. We evaluated each algorithm on each task under four degrees of data heterogeneity. Smaller #C and α in Table 2 correspond to higher heterogeneity. We find that the existing federated algorithms all suffer when data heterogeneity is high across all three tasks. For example, the top-1 accuracy of FedAvg on CIFAR-10 is 56.86% when #C=2, which is much worse than the 90.43% achieved in a more homogeneous setting (e.g., α = 0.5). In contrast, TCT achieves consistently strong performance, even in the face of high data heterogeneity. More specifically, TCT achieves the best top-1 accuracy across all settings except CIFAR-100 with α = 0.5, where TCT does only slightly worse than SCAFFOLD. In absolute terms, we find that TCT is not affected much by data heterogeneity, with performance dropping by less than 1.5% on CIFAR100 as α goes from 0.5 to 0.001. Moreover, our algorithm improves over existing methods by at least 15% in the challenging cases, including FMNIST with #C=1, CIFAR-10 with #C=1 and #C=2, and CIFAR-100 with α = 0.01 and α = 0.001. And, perhaps surprisingly, our algorithm still performs relatively well in the extreme non-i.i.d. setting where each client sees only a single class. Figure 2 compares the performance of FedAvg, SCAFFOLD, and TCT in more detail on the CIFAR100 dataset with different degrees of data heterogeneity. We consider the Dirichlet distribution with parameter α ∈ {0.1, 0.01, 0.001} and compare the training and test accuracy of these three algorithms. As shown in Figures 2(a) and 2(b), both FedAvg and SCAFFOLD struggle when data heterogeneity is high: for both algorithms, test accuracy drops significantly when α decreases. In contrast, we see from Figure 2(c) that TCT maintains almost the same test accuracy for different α. Furthermore, the same set of default parameters for our algorithm, including the local learning rate and the number of local steps, is relatively robust to different levels of data heterogeneity. 5.3 Communication Efficiency To understand the effectiveness of the local steps in our algorithm, we compare SCAFFOLD (used in TCT-Stage 2) to full-batch gradient descent (GD) applied to the overparameterized linear regression problem in Stage 2 of TCT on these datasets. For our algorithm, we set local steps M ∈ {10^2, 10^3} and use the default local learning rate. For full-batch GD, we vary the learning rate from 10^{-5} to 10^{-1} and visualize the ones that do not diverge. The results are summarized in Figure 3. Each dotted line with square markers in Figure 3 corresponds to full-batch GD with some learning rate. Across all three datasets, our proposed algorithm consistently outperforms full-batch GD. Meanwhile, we find that more local steps for our algorithm lead to faster convergence across all settings. In particular, our algorithm converges within 20 communication rounds on CIFAR100 (as shown in Figure 3(c)). These results suggest that our proposed algorithm can largely leverage local computation to improve communication efficiency. 5.4 Ablations Gradient correction. We investigate the role of gradient correction when solving overparameterized linear regression with eNTK features in TCT. We compare SCAFFOLD (used in TCT) to FedAvg on solving the regression problems and summarize the results in Figure 4. We use the default local learning rate and consider three different numbers of local steps for both algorithms, i.e., M ∈ {10, 100, 1000}. As shown in Figure 4, our approach largely outperforms FedAvg when the number of local steps is large (M ≥ 100) across all three datasets. We also find that the performance of FedAvg can even degrade when the number of local steps increases. For example, FedAvg with M = 1000 performs the worst across all three datasets. In contrast to FedAvg, SCAFFOLD converges faster when the number of local steps increases. These observations highlight the importance of gradient correction in our algorithm. Model weights for computing eNTK features. To understand the impact of the model weights trained in Stage 1 of TCT, we evaluate TCT run with different T1 parameters. We consider T1 ∈ {0, 20, 40, 60, 80, 100}, where T1 = 0 corresponds to randomly initialized weights. From Figure 5(a), we find that weights after FedAvg training are much more effective than weights at random initialization. Specifically, without FedAvg training, the eNTK (at random initialization) performs worse than standard FedAvg. In contrast, TCT significantly outperforms FedAvg by a large margin (roughly 20% in test accuracy) when eNTK features are extracted from a FedAvg-trained model. Also, we find that TCT is stable with respect to the choice of communication rounds T1 in Stage 1.
For example, models trained by TCT with T1 ≥ 60 achieve similar performance. Effect of normalization. In Figure 5(b), we investigate the role of normalization in TCT by comparing TCT run with normalized and unnormalized eNTK features. The same number of local steps (M = 500) is applied in both settings. We tune the learning rate η for each setting and plot the run that performs best (as measured in training accuracy). The results in Figure 5(b) suggest that the normalization step in TCT significantly improves communication efficiency by increasing convergence speed. In particular, TCT with normalization converges to nearly 100% training accuracy in approximately 40 communication rounds, which is much faster than TCT without normalization. Pre-training vs. Bootstrapping. In Appendix B.4, we explore the effect of starting from a pre-trained model instead of relying on bootstrapping to learn the features. We find that pre-training further improves the performance of TCT and completely erases the gap between centralized and federated learning. Additionally, we conduct experiments investigating the role of the training loss function and the subsampling approximation in TCT-Stage 2. For TCT-Stage 2, we find that neither using the cross-entropy loss as the training objective nor applying full eNTK representations significantly improves the performance of TCT. On the other hand, applying the subsampling approximation in TCT-Stage 2 can largely improve communication efficiency compared to the full eNTK representation approach. See Appendix B.7 for detailed experimental results. 6 Conclusion We have argued that nonconvexity poses a significant challenge for federated learning algorithms. We found that a neural network trained in a federated manner does learn useful features, but fails to use them and thus has poor overall accuracy. To sidestep this issue, we proposed a train-convexify-train procedure: first, train the neural network using FedAvg; then, optimize (using SCAFFOLD) a convex approximation of the model obtained using its empirical neural tangent kernel. We showed that the first stage extracts meaningful features, whereas the second stage learns to utilize these features to obtain a highly performant model. The resulting algorithm is significantly faster and more stable with respect to hyper-parameters than previous federated learning methods. Finally, we also showed that given a good pre-trained feature extractor, our convexify-train procedure fully closes the gap between centralized and federated learning. Our algorithm adds to the growing body of work using the eNTK to linearize neural networks and obtain tractable convex approximations. However, unlike most of these past works, which only work with pre-trained models, our bootstrapping allows training models from scratch. Finally, we stress that the success of our approach underscores the need to revisit the theoretical understanding of heterogeneous federated learning. Nonconvexity seems to play an outsized role, but its effect in FL has hitherto been unexplored. In particular, black-box notions of difficulty such as gradient dissimilarity or distances between client optima seem insufficient to capture practical performance. It is likely that further progress in the field (e.g., federated pre-training of foundation models) will require tackling the issue of nonconvexity head-on. Acknowledgments and Disclosure of Funding We would like to thank the anonymous reviewers for their constructive suggestions and comments.
Yaodong Yu acknowledges support from the joint Simons Foundation-NSF DMS grant #2031899. Alexander Wei acknowledges support from an NSF Graduate Research Fellowship under grant DGE2146752. Sai Praneeth Karimireddy acknowledges support of an SNSF postdoc mobility fellowship. Yi Ma acknowledges support from ONR grants N00014-20-1-2002 and N00014-22-12102 and the joint Simons Foundation-NSF DMS grant #2031899. Michael Jordan acknowledges support of the ONR Mathematical Data Science program.
1. What is the main contribution of the paper, and how does it address the problem of non-convex models in federated learning? 2. What are the strengths and weaknesses of the proposed method, particularly regarding its simplicity and performance in cross-silo settings? 3. How does the method handle different sources of non-iidness, such as label skew and covariate shift? 4. What is the impact of the linear model's reduced flexibility compared to the non-convex neural network, and how does it affect the method's performance? 5. How do the various approximations involved in the computation of the eNTK features affect the final performance, and what is the impact of each approximation? 6. How would the method perform without the approximations to the eNTK, and what is the motivation behind using an MSE loss on a one-hot representation of the targets? 7. Are there any potential limitations or negative aspects of the method that the authors could have discussed further?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This work proposes BooNTK, a two-stage approach for FL that allows for better performance in cross-silo settings. The main idea is to 1) perform standard FedAvg training on a non-convex model, such as a neural network, then 2) approximate it with a first-order Taylor approximation around the parameters found, and finally 3) optimize the parameters of this linear model with federated training using optimisers with gradient correction, such as SCAFFOLD. The authors motivate this through the optimization difficulties of non-convex models in the non-iid setting; empirically, the loss of performance when moving to non-iid data is larger for non-convex models compared to convex ones. The authors describe several further approximations that they make to render the method more efficient (i.e., removing the bias term of the Taylor approximation and only considering a subset of the coordinates of the gradient vector) and then demonstrate BooNTK’s performance on three label-skew non-iid settings on FMNIST, CIFAR10 and CIFAR100. Strengths And Weaknesses Strengths Good results on the cross-silo setting There is significant improvement by the BooNTK pipeline on all tasks involving a varying amount of label skew. Intuitive ablations and method exposition The method is explained and motivated well, and I liked the layer-importance investigation. The ablation studies are also useful and highlight the sensitivity of the method to the choice of (some) hyper-parameters. Simple method The method is quite simple and straightforward. As a bonus, the second step is also computationally more efficient than training the original neural network. Weaknesses: No discussion about how different non-iidness settings affect the method Given the claims of the work about the negative effects of data heterogeneity in the non-convex setting, I would have expected the authors to experiment with more diverse non-iid settings (i.e., not just label skew). As the current label-skew experiments mostly concern different marginals over the labels, p(y), at each client, a better-adjusted classification layer is important. I believe this is one of the reasons that BooNTK is effective in these scenarios. However, not all non-iidness is just label skew; for example, you could consider an image recognition system running on different mobile phones. As each phone comes with its own camera sensor, covariate shift (i.e., different p(x) across clients) can also manifest as a source of non-iidness. In this case, I would intuitively expect that the distribution of the features also differs among clients; therefore, adjusting just the classifier might not be enough. No discussion about how the linear model is less flexible The linear model is less flexible than the non-convex neural network. As a result, its improved performance (on a given feature set) could be just because it has fewer degrees of freedom to adapt to the non-i.i.d. peculiarities. I believe a discussion around this is missing. There are several approximations involved in the computation of the eNTK features, the impact of which is unclear In section 4.1 the authors describe a series of approximations to the eNTK in order to reduce the computational burden, yet there is no (empirical) evaluation of how each one affects the final performance. Questions Most of my questions revolve around the weaknesses described above.
The authors do not take into account that the linear model is less flexible by design and therefore harder to fit to the non-iid peculiarities (which could explain its improved performance). I believe a control experiment would highlight whether this can be an issue in practice. For example, one could train a heavily regularised non-convex model to see if the gap shrinks in Figure 1. How would the performance of a pipeline similar to BooNTK fare if, instead of the linear-model fitting step (i.e., stage 2), one just switched from FedAvg to FedProx with a strong regularisation strength for stage 2? I believe that the effect of various sources of non-iidness on BooNTK should be investigated, as the experiments consider only label skew (where generally adjusting the classifier alone is sufficient). What happens when there is, e.g., covariate shift? Intuitively, this should affect the earlier layers more. I believe some more investigation of the eNTK part of BooNTK is required. For example, how does fine-tuning just the classification layer (while keeping the rest of the network frozen) with SCAFFOLD work, relative to the eNTK approach (which requires further approximations)? Furthermore, how does BooNTK perform without the approximations to the eNTK (i.e., do the approximations have beneficial regularisation effects or are they detrimental to performance)? The authors consider classification tasks; however, for the second stage of BooNTK, they consider an MSE loss on a one-hot representation of the targets. What is the motivation for the MSE loss? Intuitively, you should be able to apply a softmax on the linear model of the first equation in Section 4.1 to get a logistic regression model, which is more appropriate for classification. Limitations The authors could have spent a bit more time discussing potentially negative aspects of their method (e.g., the linearity of the model in the second stage).
NIPS
Title TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels Abstract State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions. For neural networks, even when centralized SGD easily finds a solution that is simultaneously performant for all clients, current federated optimization methods fail to converge to a comparable solution. We show that this performance disparity can largely be attributed to optimization challenges presented by nonconvexity. Specifically, we find that the early layers of the network do learn useful features, but the final layers fail to make use of them. That is, federated optimization applied to this non-convex problem distorts the learning of the final layers. Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network’s empirical neural tangent kernel approximation. Our technique yields accuracy improvements of up to +36% on FMNIST and +37% on CIFAR10 when clients have dissimilar data. 1 Introduction Federated learning is a newly emerging paradigm for machine learning where multiple data holders (clients) collaborate to train a model on their combined dataset. Clients only share partially trained models and other statistics computed from their dataset, keeping their raw data local and private [53, 37]. By obviating the need for a third party to collect and store clients’ data, federated learning has several advantages over the classical, centralized paradigm [14, 31, 23]: it ensures clients’ consent is tied to the specific task at hand by requiring active participation of the clients in training, confers some basic level of privacy, and has the potential to make machine learning more participatory in general [43, 36]. Further, widespread legislation of data portability and privacy requirements (such as GDPR and CCPA) might even make federated learning a necessity [59]. Collaboration among clients is most attractive when clients have very different subsets of the combined dataset (data heterogeneity). For example, different autonomous driving companies may only be able to collect data in weather conditions specific to their location, whereas their vehicles would need to function under all conditions. In such a scenario, it would be mutually beneficial for companies in geographically diverse locations to collaborate and share data with each other. Further, in such settings, clients are physically separated and connected by ad-hoc networks with large latencies and limited bandwidth. This is especially true when clients are edge devices such as mobile phones, IoT sensors, etc. Thus, communication efficiency is crucial for practical federated learning. However, it is precisely under such circumstances (large data heterogeneity and low communication) that current 36th Conference on Neural Information Processing Systems (NeurIPS 2022). algorithms fail dramatically [27, 48, 39, 61, 71, 1, 46, 3, 72, etc.]. This motivates our central question: Why do current federated methods fail in the face of data heterogeneity—and how can we fix them? Our solution. We make two main observations: (i) We show that, even with data heterogeneity, linear models can be trained in a federated manner through gradient correction techniques such as SCAFFOLD [39]. 
While this observation is promising, it alone remains limited, as linear models are not rich enough to solve practical problems of interest (e.g., those that require feature learning). (ii) We shed light on why current federated algorithms struggle to train deep, nonconvex models. We observe that the failure of existing methods for neural networks is not uniform across the layers. The early layers of the network do in fact learn useful features, but the final layers fail to make use of them. Specifically, federated optimization applied to this nonconvex problem results in distorted final layers. These observations suggest a train-convexify-train federated algorithm, which we call TCT: first, use any off-the-shelf federated algorithm [such as FedAvg, 53] to train a deep model to extract useful features; then, compute a convex approximation of the deep model using its empirical Neural Tangent Kernel (eNTK) [34, 44, 20, 51, 75], and use gradient correction methods such as SCAFFOLD to train the final model. Effectively, the second-stage features freeze the features learned in the first stage and fit a linear model over them. We show that this simple strategy is highly performant on a variety of tasks and models—we obtain accuracy gains up to 36% points on FMNIST with a CNN, 37% points on CIFAR10 with ResNet18-GN, and 16% points on CIFAR100 with ResNet18-GN. Further, its convergence remains unaffected even by extreme data heterogeneity. Finally, we show that given a pre-trained model, our method completely closes the gap between centralized and federated methods. 2 Related Work Federated learning. There are two main motivating scenarios for federated learning (FL). The first is where internet service companies (e.g., Google, Facebook, Apple, etc.) want to train machine learning models over their users’ data, but do not want to transmit raw personalized data away from user devices [60, 8]. This is the setting of cross-device federated learning and is characterized by an extremely large number of unreliable clients, each of whom has very little data and the collections of data are assumed to be homogeneous [37, 10, 38, 8]. The second motivating scenario is when valuable data is split across different organizations, each of whom is either protected by privacy regulation or is simply unwilling to share their raw data. Such “data islands” are common among hospital networks, financial institutions, autonomous-vehicle companies, etc. This is known as cross-silo federated learning and is characterized by a few highly reliable clients, who potentially have extremely diverse data. In this work, we focus on the latter scenario. Metrics in FL. FL research considers numerous metrics, such as fairness across users [55, 47, 62], formal security and privacy guarantees [9, 60, 21, 56], robustness to corrupted agents and corrupted training data [7, 64, 19, 40, 26], preventing backdoors at test time [6, 66, 69, 52], etc. While these concerns are important, the main goal of FL (and our work) is to achieve high accuracy with minimal communication [53]. Clients are typically geographically separated yet need to communicate large deep learning models over unoptimized ad-hoc networks [37]. Finally, we focus on the setting where all users are interested in training the same model over the combined dataset. This is in contrast to model-agnostic protocols [49, 58, 3] or personalized federated learning [16, 18, 78, 13, 42, 12]. Finally, we focus on minimizing the number of rounds required. 
Our approach can be combined with communication compression, which reduces bits sent per round [67, 4, 24, 65]. Federated optimization. Algorithms for FL proceed in rounds. In each round, the server sends a model to the clients, who partially train this model using their local compute and data. The clients send these partially trained models back to the server who then aggregates them, finishing a round. FedAvg [53], which is the de facto standard FL algorithm, uses SGD to perform local updates on the clients and aggregates the client models by simply averaging their parameters. Unfortunately, however, FedAvg has been observed to perform poorly when faced with data heterogeneity across the clients [27, 48, 39, 61, 71, 1, 46, 3, 72, 17, etc.]. Theoretical investigations of this phenomenon [39, 76] showed that this was a result of gradient heterogeneity across the clients. Consider FedAvg initialized with the globally optimal model. If this model is not also optimal for each of the clients as well, the local updates will push it away from the global optimum. Thus, convergence would require a careful tuning of hyper-parameters. To overcome this issue, SCAFFOLD [39] and FedDyn [1] propose to use control variates to correct for the biases of the individual clients akin to variance reduction [35, 15]. This gradient correction is applied in every local update by the client and provably nullifies the effect of gradient heterogeneity [39, 54, 12]. However, as we show here, such methods are insufficient to overcome high data heterogeneity especially for deep learning. Other, more heuristic approaches to combat gradient heterogeneity include using a regularizer [48] and sophisticated server aggregation strategies such as momentum [28, 70, 50] or adaptivity [61, 38, 11]. A second line of work pins the blame on performance loss due to averaging nonconvex models. To overcome this, Singh and Jaggi [63], Yu et al. [81] propose to learn a mapping between the weights of the client models before averaging, Afonin and Karimireddy [3] advocates a functional perspective and replaces the averaging step with knowledge distillation, and Wang et al. [74], Li et al. [46], Tan et al. [68] attempt to align the internal representations of the client models. However, averaging is unlikely to be the only culprit since FedAvg does succeed under low heterogeneity, and averaging nonconvex models can lead to improved performance [33, 77]. Neural Tangent Kernels (NTK) and neural network linearization. NTK was first proposed to analyze the limiting behavior of infinitely wide networks [34, 44]. While NTK with MSE may be a bad approximation of real-world finite networks in general [22], it approximates the fine-tuning of a pre-trained network well [57], especially with some minor modifications [2]. That is, NTK cannot capture feature learning but does capture how a model utilizes learnt features better than last/mid layer activations. 3 The Effect of Nonconvexity In this section, we investigate the poor performance of FedAvg [53] and SCAFFOLD [39] empirically in the setting of deep neural networks, focusing on image classification with a ResNet-18. To construct our federated learning setup, we split the CIFAR-10 dataset in a highly heterogeneous manner among ten clients. We either assign each client two classes (denoted by #C=2) or distribute samples according to a Dirichlet distribution with ↵ = 0.1 (denoted by ↵=0.1). For more details, see Section 5.1. Insufficiency of gradient correction methods. 
Current theoretical work [e.g., 39, 61, 1, 73] attributes the slowdown from data heterogeneity to the individual clients having varying local optima. If no single model is simultaneously optimal for all clients, then the updates of different clients can compete with and distort each other, leading to slow convergence. This tension is captured by the variance of the updates across the clients [client gradient heterogeneity, see 72]. Gradient correction methods such as SCAFFOLD [39] and FedDyn [1] explicitly correct for this and are provably unaffected by gradient heterogeneity for both convex and nonconvex losses. These theoretical predictions are aligned with the results of Figure 1(a), where the loss landscape is convex: SCAFFOLD is relatively unaffected by the level of heterogeneity and consistently outperforms FedAvg. In particular, performance is largely dictated by the algorithm and not the data distributions. This shows that client gradient heterogeneity captures the difficulty of the problem well. On the other hand, when training a ResNet-18 model with nonconvex loss landscape, Figure 1(b) shows that both FedAvg and SCAFFOLD suffer from data heterogeneity. This is despite the theory of gradient correction applying to both convex and nonconvex losses. Further, the train and test accuracies in Figure 1(b) match quite closely, suggesting that the failure lies in optimization (not fitting the training data) rather than generalization. Thus, while the current theory makes no qualitative distinctions between convex and nonconvex convergence, the practical behavior of algorithms in these settings is very different. Such differences between theoretical predictions and practical reality suggests that black-box notions such as gradient heterogeneity are insufficient for capturing the difficulty of training deep models. Ease of feature learning. We now dive into how a ResNet-18 trained with FedAvg (56.9% accuracy) differs from the centralized baseline (91.9% accuracy). We first apply linear probing to the FedAvg model (i.e., retraining with all but the output layer frozen). Note that this is equivalent to (convex) logistic regression over the last-layer activations. This simple procedure produces a striking jump from 56.9% to 77.9% accuracy. Thus, of the 35% gap in accuracy between the FedAvg and centralized models, 21% may be attributed to a failure to optimize the linear output layer. We next extend this experiment towards probing the information content of other layers. Given a FedAvg-trained model, we can use centralized training to retrain only the last ` layers while keeping the rest of the (7 `) layers (or ResNet blocks) frozen. We can also perform this procedure starting from a randomly initialized model. The performance difference between these two models can be attributed to the information content of the frozen (7 `) layers of the FedAvg model. Table 1 summarizes the results of this experiment. The large difference in accuracy (up to 42.6%) indicates the initial layers of the FedAvg model have learned useful features. There continues to be a gap between the FedAvg features and random features in the earlier layers as well,1 meaning that all layers of the FedAvg model learn useful features. We conjecture this is because from the perspective of earlier layers which perform simple edge detection, the tasks are independent of labels and the clients are i.i.d. However, the higher layers are more specialized and the effect of the heterogeneity is stronger. 
4 Method Based on the observations in Section 3, we propose train-convexify-train (TCT) as a method for overcoming data heterogeneity when training deep models in a federated setting. Our high-level 1The significant decrease in the gap as we go down the layers may be because of the skip connections in the lower ResNet blocks which allow the random frozen layers to be sidestepped. This underestimates the true utility and information content in the earlier FedAvg layers. intuition is that we want to leverage both the features learned from applying FedAvg to neural networks and the effectiveness of convex federated optimization. More specifically, we perform several rounds of “bootstrap” FedAvg to learn features before solving a convexified version of the original optimization problem. 4.1 Computing the Empirical Neural Tangent Kernel To sidestep the challenges presented by nonconvexity, we describe how we approximate a neural network by its “linearization.” Given a neural network f( · ; ✓0) with weights ✓0 2 RP mapping inputs x 2 RD to RC , we replace it by its empirical neural tangent kernel (eNTK) approximation at ✓0 given by f(x; ✓) ⇡ f(x; ✓0) + (✓ ✓0)> @ @✓ f(x; ✓0), at each x 2 RD. Under this approximation, f(x; ✓) is a linear function of the “feature vector” (f(x; ✓0), @ @✓f(x; ✓0)) and the original nonconvex optimization problem becomes (convex) linear regression with respect to these features.2 Leveraging NTK for solving federated optimization problems has also been studied in previous work [29, 82]. To reduce the computational burden of working with the eNTK approximation, we make two further approximations: First, we randomly reinitialize the last layer of ✓0 and only consider @@✓f(x; ✓0) with respect to a single output logit. Over the randomness of this reinitialization, E[f(x; ✓0)] = 0. Moreover, given the random reinitialization, all the output logits of f(x; ✓0) are symmetric. These observations mean each data point x can be represented by a P -dimensional feature vector @ @✓f1(x; ✓0), where f1( · ; ✓0) refers to the first output logit. Then, we apply a dimensionality reduction by subsampling p random coordinates from this P -dimensional featurization.3 In our setting, this sub-sampling has the added benefit of reducing the number of bits communicated per round. In summary, we transform our original (nonconvex) optimization problem over a neural network initialized at ✓0 into a convex optimization problem in three steps: (i) reinitialize the last layer of ✓0; (ii) for each data point x, compute the gradient eNTK(x; ✓0) := @@✓f1(x; ✓0); (iii) subsample the coordinates of eNTK(x; ✓0) for each x to obtain a reduced-dimensionality eNTK representation. Let S : RP ! Rp denote this subsampling operation. Finally, we solve the resulting linear regression problem over these eNTK representations.4 4.2 Convexifying Federated Learning via eNTK Representations The eNTK approximation lets us convexify the neural net optimization problem: following Section 4.1, we may extract (from a model trained with FedAvg) eNTK representations of inputs from each client. It remains to fit an overparameterized linear model using these eNTK features in a federated manner. For ease of presentation, we denote the subsampled eNTK representation of input x by z 2 Rp, where p is the eNTK feature dimension after subsampling. We use zki to represent the eNTK feature of the i-th sample from the k-th client. 
Then, for $K$ the number of clients, $Y_i^k$ the one-hot encoded labels, $n_k$ the number of data points of the $k$-th client, $n := \sum_{k \in [K]} n_k$ the number of data points across all clients, and $p_k := n_k/n$, we can approximate the nonconvex neural net optimization problem by the convex linear regression problem
$$\min_{W}\; L(W) := \sum_{k=1}^{K} p_k \cdot L_k(W), \quad \text{where } L_k(W) := \frac{1}{n_k} \sum_{i=1}^{n_k} \lVert W^\top z_i^k - Y_i^k \rVert_2^2. \quad (1)$$
To obtain the eNTK representation $z$ of an input $x$, we take $\theta_0$ in Section 4.1 to be the weights of a model trained with FedAvg. As we will show in Section 5, the convex reformulation in Eq. (1) significantly reduces the number of communication rounds needed to find an optimal solution.

²For classification problems, we one-hot encode labels and fit a linear model using squared loss.
³That such representations empirically have low effective dimension due to fast eigenvalue decay [see, e.g., 75] means that such a random projection approximately preserves the geometry of the data points [5, 83]. For all of our experiments, we set p = 100,000.
⁴Given a fitted linear model with weights $W \in \mathbb{R}^{p \times C}$, the prediction at $x$ is $\arg\max_j\, [W^\top S(\mathrm{eNTK}(x))]_j$.

4.3 Train-Convexify-Train (TCT)

We now present our algorithm train-convexify-train (TCT), with convexification done via the neural tangent kernel, for federated optimization.

TCT — train-convexify-train with eNTK representations
• Stage 1: Extract eNTK features from a FedAvg-trained model. FedAvg is first used to train the model for $T_1$ communication rounds. Let $\theta_{T_1}$ denote the model weights after these $T_1$ rounds. Then, each client locally computes subsampled eNTK features, i.e., $z_i^k = S(\mathrm{eNTK}(x_i^k; \theta_{T_1}))$ for $k \in [K]$ and $i \in [n_k]$.
• Stage 2: Decentralized linear regression with gradient correction. Given samples $\{(z_i^k, Y_i^k)\}_{i=1}^{n_k}$ on each client $k$, first normalize the eNTK inputs of all clients with a single communication round.ᵃ Then, solve the linear regression problem defined in Eq. (1) by SCAFFOLD with local learning rate $\eta$ and local steps $M$.ᵇ

ᵃFor every feature in the eNTK representation, subtract the mean and scale to unit variance.
ᵇThe detailed description of SCAFFOLD for solving linear regression problems can be found in Algorithm 1, Appendix A. It has the same communication and computation cost as FedAvg.

To motivate TCT, recall that in Section 3 we found that FedAvg learns “useful” features despite its poor performance, especially in the earlier layers. By taking an eNTK approximation, TCT optimizes a convex approximation while using information from all layers of the model. Empirically, we find that these extracted eNTK features significantly reduce the number of communication rounds needed to learn a performant model, even with data heterogeneity.

5 Experiments

We now study the performance of TCT for the decentralized training of deep neural networks in the presence of data heterogeneity. We compare TCT to state-of-the-art federated learning algorithms on three benchmark tasks in federated learning. For each task, we apply these algorithms on client data distributions with varying degrees of data heterogeneity. We find that our proposed approach significantly outperforms existing algorithms when clients have highly heterogeneous data across all tasks. For additional experimental results and implementation details, see Appendix B. Our code is available at https://github.com/yaodongyu/TCT.

5.1 Experimental Setup

Datasets and degrees of data heterogeneity.
We assess the performance of federated learning algorithms on the image classification tasks FMNIST [80], CIFAR10, and CIFAR100 [41]. FMNIST and CIFAR10 each consist of 10 classes, while CIFAR100 includes images from 100 classes. There are 60,000 training images in FMNIST, and 50,000 training images in CIFAR10/100. To vary the degree of data heterogeneity, we follow the setup of Li et al. [45]. We consider two types of non-i.i.d. data distribution: (i) Data heterogeneity sampled from a symmetric Dirichlet distribution with parameter $\alpha$ [49, 71]. That is, we sample $p_c \sim \mathrm{Dir}_K(\alpha)$ from a $K$-dimensional symmetric Dirichlet distribution and assign a $p_c^k$-fraction of the class $c$ samples to client $k$. (Smaller $\alpha$ corresponds to more heterogeneity.) (ii) Clients get samples from a fixed subset of classes [53]. That is, each client is allocated a subset of classes; then, the samples of each class are split into non-overlapping subsets and assigned to clients that were allocated this class. We use #C to denote the number of classes allocated to each client. For example, #C=2 means each client has samples from 2 classes. To allow for consistent comparisons, all of our experiments are run with 10 clients.

Models. For FMNIST, we use a convolutional neural network with ReLU activations consisting of two convolutional layers with max pooling followed by two fully connected layers (SimpleCNN). For CIFAR10 and CIFAR100, we mainly consider an 18-layer residual network [25] with 4 basic residual blocks (ResNet-18). In Appendix B.2, we present experimental results for other architectures.

Algorithms and training schemes. We compare TCT to state-of-the-art federated learning algorithms, focusing on the widely-used algorithms FedAvg [53], FedProx [48], and SCAFFOLD [39]. (For comparisons to additional algorithms, see Appendix B.1.) Each client uses SGD with weight decay $10^{-5}$ and batch size 64 by default. For each baseline method, we run it for 200 total communication rounds using 5 local training epochs with local learning rate selected from {0.1, 0.01, 0.001} by grid search. For TCT, we run 100 rounds of FedAvg in Stage 1 following the above and use 100 communication rounds in Stage 2 with $M = 500$ local steps and local learning rate $\eta = 5 \cdot 10^{-5}$.

5.2 Main Results

Table 2 displays the top-1 accuracy of all algorithms on the three tasks with varying degrees of data heterogeneity. We evaluated each algorithm on each task under four degrees of data heterogeneity. Smaller #C and $\alpha$ in Table 2 correspond to higher heterogeneity. We find that the existing federated algorithms all suffer when data heterogeneity is high across all three tasks. For example, the top-1 accuracy of FedAvg on CIFAR-10 is 56.86% when #C=2, which is much worse than the 90.43% achieved in a more homogeneous setting (e.g., $\alpha = 0.5$). In contrast, TCT achieves consistently strong performance, even in the face of high data heterogeneity. More specifically, TCT achieves the best top-1 accuracy across all settings except CIFAR-100 with $\alpha = 0.5$, where TCT does only slightly worse than SCAFFOLD. In absolute terms, we find that TCT is not affected much by data heterogeneity, with performance dropping by less than 1.5% on CIFAR100 as $\alpha$ goes from 0.5 to 0.001. Moreover, our algorithm improves over existing methods by at least 15% in the challenging cases, including FMNIST with #C=1, CIFAR-10 with #C=1 and #C=2, and CIFAR-100 with $\alpha = 0.01$ and $\alpha = 0.001$. And, perhaps surprisingly, our algorithm still performs relatively well in the extreme non-i.i.d.
setting where each client sees only a single class. Figure 2 compares the performance of FedAvg, SCAFFOLD, and TCT in more detail on the CIFAR100 dataset with different degrees of data heterogeneity. We consider the Dirichlet distribution with parameter $\alpha \in \{0.1, 0.01, 0.001\}$ and compare the training and test accuracy of these three algorithms. As shown in Figures 2(a) and 2(b), both FedAvg and SCAFFOLD struggle when data heterogeneity is high: for both algorithms, test accuracy drops significantly when $\alpha$ decreases. In contrast, we see from Figure 2(c) that TCT maintains almost the same test accuracy for different $\alpha$. Furthermore, the same set of default parameters for our algorithm, including the local learning rate and the number of local steps, is relatively robust to different levels of data heterogeneity.

5.3 Communication Efficiency

To understand the effectiveness of the local steps in our algorithm, we compare SCAFFOLD (used in TCT-Stage 2) to full batch gradient descent (GD) applied to the overparameterized linear regression problem in Stage 2 of TCT on these datasets. For our algorithm, we set local steps $M \in \{10^2, 10^3\}$ and use the default local learning rate. For full batch GD, we vary the learning rate from $10^{-5}$ to $10^{-1}$ and visualize the ones that do not diverge. The results are summarized in Figure 3. Each dotted line with square markers in Figure 3 corresponds to full batch GD with some learning rate. Across all three datasets, our proposed algorithm consistently outperforms full batch GD. Meanwhile, we find that more local steps for our algorithm lead to faster convergence across all settings. In particular, our algorithm converges within 20 communication rounds on CIFAR100 (as shown in Figure 3(c)). These results suggest that our proposed algorithm can largely leverage the local computation and improve communication efficiency.

5.4 Ablations

Gradient correction. We investigate the role of gradient correction when solving overparameterized linear regression with eNTK features in TCT. We compare SCAFFOLD (used in TCT) to FedAvg on solving the regression problems and summarize the results in Figure 4. We use the default local learning rate and consider three different numbers of local steps for both algorithms, i.e., $M \in \{10, 100, 1000\}$. As shown in Figure 4, our approach largely outperforms FedAvg when the number of local steps is large ($M \ge 100$) across the three datasets. We also find that the performance of FedAvg can even degrade when the number of local steps increases. For example, FedAvg with $M = 1000$ performs the worst across all three datasets. In contrast to FedAvg, SCAFFOLD converges faster when the number of local steps increases. These observations highlight the importance of gradient correction in our algorithm.

Model weights for computing eNTK features. To understand the impact of the model weights trained in Stage 1 of TCT, we evaluate TCT run with different $T_1$ parameters. We consider $T_1 \in \{0, 20, 40, 60, 80, 100\}$, where $T_1 = 0$ corresponds to randomly initialized weights. From Figure 5(a), we find that weights after FedAvg training are much more effective than weights at random initialization. Specifically, without FedAvg training, the eNTK (at random initialization) performs worse than standard FedAvg. In contrast, TCT significantly outperforms FedAvg by a large margin (roughly 20% in test accuracy) when eNTK features are extracted from a FedAvg-trained model. Also, we find that TCT is stable with respect to the choice of communication rounds $T_1$ in Stage 1.
For example, models trained by TCT with $T_1 \ge 60$ achieve similar performance.

Effect of normalization. In Figure 5(b), we investigate the role of normalization in TCT by comparing TCT run with normalized and unnormalized eNTK features. The same number of local steps ($M = 500$) is applied in both settings. We tune the learning rate $\eta$ for each setting and plot the run that performs best (as measured in training accuracy). The results in Figure 5(b) suggest that the normalization step in TCT significantly improves communication efficiency by increasing convergence speed. In particular, TCT with normalization converges to nearly 100% training accuracy in approximately 40 communication rounds, which is much faster than TCT without normalization.

Pre-training vs. Bootstrapping. In Appendix B.4, we explore the effect of starting from a pre-trained model instead of relying on bootstrapping to learn the features. We find that pre-training further improves the performance of TCT and completely erases the gap between centralized and federated learning. Additionally, we conduct experiments investigating the role of the training loss function and the subsampling approximation in TCT-Stage 2. For TCT-Stage 2, we find that neither using the cross-entropy loss as the training objective nor applying full eNTK representations significantly improves the performance of TCT. On the other hand, applying the subsampling approximation in TCT-Stage 2 can largely improve communication efficiency compared to the full eNTK representation approach. See Appendix B.7 for detailed experimental results.

6 Conclusion

We have argued that nonconvexity poses a significant challenge for federated learning algorithms. We found that a neural network trained in such a manner does learn useful features, but fails to use them and thus has poor overall accuracy. To sidestep this issue, we proposed a train-convexify-train procedure: first, train the neural network using FedAvg; then, optimize (using SCAFFOLD) a convex approximation of the model obtained using its empirical neural tangent kernel. We showed that the first stage extracts meaningful features, whereas the second stage learns to utilize these features to obtain a highly performant model. The resulting algorithm is significantly faster and more stable to hyper-parameters than previous federated learning methods. Finally, we also showed that given a good pre-trained feature extractor, our convexify-train procedure fully closes the gap between centralized and federated learning. Our algorithm adds to the growing body of work using the eNTK to linearize neural networks and obtain tractable convex approximations. However, unlike most of these past works, which only work with pre-trained models, our bootstrapping allows training models from scratch. Finally, we stress that the success of our approach underscores the need to revisit the theoretical understanding of heterogeneous federated learning. Nonconvexity seems to play an outsized role, but its effect in FL has hitherto been unexplored. In particular, black-box notions of difficulty such as gradient dissimilarity or distances between client optima seem insufficient to capture practical performance. It is likely that further progress in the field (e.g., federated pre-training of foundation models) will require tackling the issue of nonconvexity head on.

Acknowledgments and Disclosure of Funding

We would like to thank the anonymous reviewers for their constructive suggestions and comments.
Yaodong Yu acknowledges support from the joint Simons Foundation-NSF DMS grant #2031899. Alexander Wei acknowledges support from an NSF Graduate Research Fellowship under grant DGE2146752. Sai Praneeth Karimireddy acknowledges support of an SNSF postdoc mobility fellowship. Yi Ma acknowledges support from ONR grants N00014-20-1-2002 and N00014-22-12102 and the joint Simons Foundation-NSF DMS grant #2031899. Michael Jordan acknowledges support of the ONR Mathematical Data Science program.
1. What are the main contributions and strengths of the paper regarding its empirical observations and novel federated learning method? 2. What are the weaknesses and limitations of the proposed approach, particularly in comparison to other baseline methods and personalized FL methods? 3. How does the reviewer suggest improving the paper, such as providing more intuition or empirical evidence for certain design choices and comparing against additional baselines? 4. Are there any minor notes or suggestions for improvement in the review, such as including related works, providing more explanation for certain aspects of the method, or ensuring fair comparisons with other methods?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper makes two main contributions: 1) empirically observing that the early layers of neural networks trained by FedAvg learn useful features even in heterogeneous data settings, while the performance degradation due to data heterogeneity is due to ineffective later layers, and 2) based on this observation, proposing a novel two-stage cross-silo federated learning method, BooNTK, that first uses FedAvg to learn early-layer features, applies a particular transformation based on these features to all data points, then learns a linear classifier on top of the transformed data using SCAFFOLD. To verify that FedAvg learns useful early-layer features, FedAvg is run on a heterogeneous image dataset, then the last ℓ-many layers are retrained in a centralized manner while the earlier layers are held fixed. It is observed that using the FedAvg-pretrained early layers leads to a huge performance improvement over using random weights for the early layers when the last layer is retrained centrally. Also, there is some improvement due to using the FedAvg-pretrained vs. random weights when the last ℓ-many layers are retrained centrally for all ℓ, suggesting that the early layers learned by FedAvg are extracting useful information. This observation inspires the first stage of the proposed method: FedAvg pretraining to learn the early-layer weights. The paper also observes that SCAFFOLD is robust to data heterogeneity when the client losses are convex (but is not robust when the losses are nonconvex). This inspires the second stage of the proposed method: applying SCAFFOLD to convex losses to learn the last linear layer of the network. In particular, the second-stage approach consists of computing the neural tangent kernel (NTK) representation of each data point using the pretrained weights in the NTK computation. Then, SCAFFOLD is employed to solve a multi-output linear regression with the transformed data points as inputs and the one-hot encoded labels as the target vectors. Empirical results are provided showing large improvements in training and testing accuracy of the proposed method over FedAvg, FedProx, and SCAFFOLD on CIFAR100, FEMNIST, and MNIST with very heterogeneous partitions. Strengths And Weaknesses Strengths The paper makes interesting empirical observations that are relevant to NeurIPS and may inspire future work. In particular, the observation that FedAvg learns useful features even in data-heterogeneous settings is novel and helps to explain the empirical success of FedAvg plus fine-tuning. These observations motivate a new federated learning method with promising empirical performance. The experimental evaluation is mostly thorough, with multiple datasets tested and helpful ablations. The writing is clear. Weaknesses Not enough intuition or empirical evidence is provided for why the second stage of the algorithm should consist of linear regression on the NTK-transformed data points rather than simply fixing the first L-1 layers and running linear regression to learn the parameters of the last layer with MSE loss, or learning it with multi-class logistic regression and cross-entropy loss as is conventional. Both of the latter approaches maintain convexity of the loss functions, are much simpler to implement, and involve an optimization over far fewer parameters (presumably the output of the last layer has dimension far less than p=100,000 as in the NTK approach).
These approaches should be compared against as baselines and intuition should be provided as to why they are not used. On a similar note, results on the computational cost of the proposed method should be included. It seems to be much larger than the baselines due to mapping every data point to a high dimension via parameter derivative computation. Personalized FL has been shown to be an effective alternative to learning a single global model in data-heterogeneous settings. As such, some personalized FL methods should also be compared against. Also, the experimental results would be strengthened by comparison against more recent FL methods that learn a single global model, e.g., [23, 63]. Comparing with only 3 baselines is low for an empirical paper. Minor notes Missing related works: Huang et al., 2021 and Yue et al., 2021 employ the NTK for FL. Intuition on why it makes sense to run linear regression for classification problems rather than logistic regression would be helpful. Since BooNTK gets to pretrain using FedAvg for T_1 rounds, fair comparison with the other methods should allow them to train for an extra T_1 rounds. Cross-device FL may still have data heterogeneity. SCAFFOLD has higher communication and computational cost than FedAvg by a constant factor due to computing and communicating the gradient correction terms (footnote b). “By taking an eNTK approximation, BooNTK optimizes a convex approximation while using information from all layers of the model.” - besides the last layer. Huang et al., FL-NTK: A Neural Tangent Kernel-based Framework for Federated Learning Convergence Analysis, https://arxiv.org/pdf/2105.05001.pdf, 2021. Yue et al., Neural Tangent Kernel Empowered Federated Learning, https://arxiv.org/pdf/2110.03681.pdf, 2021. Questions Why is the proposed approach limited to the cross-silo setting, with a small number of clients? It seems to me that it should also work for settings with many clients. Or, if there are many clients and fewer samples per client, does locally optimizing the high-dimensional linear regression diverge? Limitations The authors should discuss the computational complexity of the proposed method.
NIPS
Title Noise-Contrastive Estimation for Multivariate Point Processes Abstract The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation—a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time. 1 Introduction Maximum likelihood estimation (MLE) is a popular training method for generative models. However, to obtain the likelihood of a generative model given the observed data, one must compute the probability of each observed sample, which often includes an expensive normalizing constant. For example, in a language model, each word is typically drawn from a softmax distribution over a large vocabulary, whose normalizing constant requires a summation over the vocabulary. This paper aims to alleviate a similar computational cost for multivariate point processes. These generative models are natural tools to analyze streams of discrete events in continuous time. Their likelihood is improved not only by raising the probability of the observed events, but by lowering the probabilities of the events that were observed not to occur. There are infinitely many times at which no event of any type occurred; to predict these non-occurrences, the likelihood must integrate the infinitesimal event probability for each event type over the entire observed time interval. Therefore, the likelihood is expensive to compute, particularly when there are many possible event types. As an alternative to MLE, we propose to train the model by learning to discriminate the observed events from events sampled from a noise process. Our method is a version of noise-contrastive estimation (NCE), which was originally developed for unnormalized (energy-based) distributions and then extended to conditional softmax distributions such as language models. To our best knowledge, we are the first to extend the method and its theoretical guarantees (for optimality, consistency and efficiency) to the context of multivariate point processes.
We will also discuss similar efforts in related areas in section 4. On several datasets, our method shows compelling results. By evaluating fewer event intensities, training takes much less wall-clock time while still achieving competitive log-likelihood.

2 Preliminaries

2.1 Event Streams and Multivariate Point Processes

Given a fixed time interval $[0, T)$, we may observe an event stream $x_{[0,T)}$: at each continuous time $t$, the observation $x_t$ is one of the discrete types $\{\emptyset, 1, \ldots, K\}$, where $\emptyset$ means no event. A non-$\emptyset$ observation is called an event. A generative model of an event stream is called a multivariate point process.∗ We wish to fit an autoregressive probability model to observed event streams. In a discrete-time autoregressive model, events would be generated from left to right, where $x_t$ is drawn from a distribution that depends on $x_0, \ldots, x_{t-1}$. The continuous-time version still generates events from left to right,¹ but at any specific time $t$ we have $p(x_t = \emptyset) = 1$, with only an infinitesimal probability of any event. (For a computationally practical sampling method, see section 3.1.) The model is a stochastic process defined by functions $\lambda_k$ that determine a finite intensity $\lambda_k(t \mid x_{[0,t)}) \ge 0$ for each event type $k \neq \emptyset$ at each time $t > 0$. This intensity depends on the history of events $x_{[0,t)}$ that were drawn at times $< t$. It quantifies the instantaneous rate at time $t$ of events of type $k$. That is, $\lambda_k(t \mid x_{[0,t)})$ is the limit as $dt \to 0^+$ of $\frac{1}{dt}$ times the expected number of events of type $k$ on the interval $[t, t + dt)$, where the expectation is conditioned on the history. As the event probabilities are infinitesimal, the times of the events are almost surely distinct. To ensure that we have a point process, the intensity functions must be chosen such that the total number of events on any bounded interval is almost surely finite. Models of this form include inhomogeneous Poisson processes (Daley & Vere-Jones, 2007), in which the intensity functions ignore the history, as well as (non-explosive) Hawkes processes (Hawkes, 1971) and their modern neural versions (Du et al., 2016; Mei & Eisner, 2017). Most models use intensity functions that are continuous between events. Our analysis requires only

Assumption 1 (Continuity). For any event stream $x_{[0,T)}$ and event type $k \in \{1, \ldots, K\}$, $\lambda_k(t \mid x_{[0,t)})$ is Riemann integrable, i.e., bounded and continuous almost everywhere w.r.t. time $t$.

2.2 Maximum Likelihood Estimation: Usefulness and Difficulties

In practice, we parameterize the intensity functions by $\theta$. We write $p_\theta$ for the resulting probability density over event streams. When learning $\theta$ from data, we make the conventional assumption that the true point process $p^*$ actually falls into the chosen model family:

Assumption 2 (Existence). There exists at least one parameter vector $\theta^*$ such that $p_{\theta^*} = p^*$.

Then as proved in Appendix A, such a $\theta^*$ can be found as an argmax of
$$J_{\mathrm{LL}}(\theta) \;\overset{\text{def}}{=}\; \mathbb{E}_{x_{[0,T)} \sim p^*}\big[\log p_\theta(x_{[0,T)})\big] \quad (1)$$
Given assumption 1, the $\theta$ values that maximize $J_{\mathrm{LL}}(\theta)$ are exactly the set $\Theta^*$ of values for which $p_\theta = p^*$: any $\theta$ for which $p_\theta \neq p^*$ would end up with a strictly smaller $J_{\mathrm{LL}}(\theta)$ by increasing the cross entropy $-p^* \log p_\theta$ over some interval $(t, t')$ for a set of histories with non-zero measure. If we modify equation (1) to take the expectation under the empirical distribution of event streams $x_{[0,T)}$ in the training dataset, then $J_{\mathrm{LL}}(\theta)$ is proportional to the log-likelihood of $\theta$.
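The models above are specified entirely through their intensity functions. As a toy illustration of one such history-dependent intensity, here is a Python sketch of an exponential-kernel multivariate Hawkes intensity; the parameter values below are made up for illustration and are not from the paper.

```python
import math

def hawkes_intensity(k, t, history, mu, alpha, beta):
    """lambda_k(t | history) = mu[k] + sum over past events (s, j) of
    alpha[j][k] * exp(-beta * (t - s)), where `history` is a list of
    (time, type) pairs with time < t."""
    lam = mu[k]
    for s, j in history:
        lam += alpha[j][k] * math.exp(-beta * (t - s))
    return lam

# Example with K = 2 event types: base rates mu, excitation matrix alpha.
mu = [0.2, 0.1]
alpha = [[0.3, 0.1], [0.0, 0.2]]
print(hawkes_intensity(0, 2.5, [(1.0, 1), (2.0, 0)], mu, alpha, beta=1.0))
```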
For any $x_{[0,T)}$ that satisfies the condition in assumption 1, the log-density used in equation (1) can be expressed in terms of $\lambda_k(t \mid x_{[0,t)})$:
$$\log p_\theta(x_{[0,T)}) = \sum_{t : x_t \neq \emptyset} \log \lambda_{x_t}(t \mid x_{[0,t)}) \;-\; \int_{t=0}^{T} \sum_{k=1}^{K} \lambda_k(t \mid x_{[0,t)})\, dt \quad (2)$$
Notice that the second term lacks a log. It is expensive to compute in the following cases:
• The total number of event types $K$ is large, making $\sum_{k=1}^{K}$ slow.
• The integral $\int_{t=0}^{T}$ is slow to estimate well, e.g., via a Monte Carlo estimate $\frac{T}{J} \sum_{j=1}^{J} \sum_{k=1}^{K} \lambda_k(t_j)$ where each $t_j$ is randomly sampled from the uniform distribution over $[0, T)$.
• The chosen model architecture makes it hard to parallelize the $\lambda_k(t_j)$ computation over $j$ and $k$.

∗This paper uses endnotes instead of footnotes. They are found at the start of the supplementary material.

2.3 Noise-Contrastive Estimation in Discrete Time

For autoregressive models of discrete-time sequences, a similar computational inefficiency can be tackled by applying the principle of noise-contrastive estimation (Gutmann & Hyvärinen, 2010), as follows. For each history $x_{0:t} \overset{\text{def}}{=} x_0 x_1 \cdots x_{t-1}$ in training data, NCE trains the model $p_\theta$ to discriminate the actually observed datum $x_t$ from some noise samples whose distribution $q$ is known. The intuition is: optimal performance is obtained if and only if $p_\theta$ matches the true distribution $p^*$. More precisely, given a bag $\{x_t^0, x_t^1, \ldots, x_t^M\}$, where exactly one element of the bag was drawn from $p^*$ and the rest drawn i.i.d. from $q$, consider the log-posterior probability (via Bayes’ Theorem²) that $x_t^0$ was the one drawn from $p^*$:
$$\log \frac{p^*(x_t^0 \mid x_{0:t}) \prod_{m=1}^{M} q(x_t^m \mid x_{0:t})}{\sum_{m=0}^{M} p^*(x_t^m \mid x_{0:t}) \prod_{m' \neq m} q(x_t^{m'} \mid x_{0:t})} \quad (3)$$
The “ranking” variant of NCE (Jozefowicz et al., 2016) substitutes $p_\theta$ for $p^*$ in this expression, and seeks $\theta$ (e.g., by stochastic gradient ascent) to maximize the expectation of the resulting quantity when $x_t^0$ is a random observation in training data,³ $x_{0:t}$ is its history, and $x_t^1, \ldots, x_t^M$ are drawn i.i.d. from $q(\cdot \mid x_{0:t})$. This objective is really just conditional maximum log-likelihood on a supervised dataset of $(M+1)$-way classification problems. Each problem presents an unordered set of $M + 1$ samples—one drawn from $p^*$ and the others drawn i.i.d. from $q$. The task is to guess which sample was drawn from $p^*$. Conditional MLE trains $\theta$ to maximize (in expectation) the log-probability that the model assigns to the correct answer. In the infinite-data limit, it will find $\theta$ (if possible) such that these log-probabilities match the true ones given by (3). For that, it is sufficient for $\theta$ to be such that $p_\theta = p^*$. Given assumption 2, Ma & Collins (2018) show that $p_\theta = p^*$ is also necessary, i.e., the NCE task is sufficient to find the true parameters. Although the NCE objective does not learn to predict the full observed sample $x_t$ as MLE does, but only to distinguish it from the $M$ noise samples, their theorem implies that in expectation over all possible sets of $M$ noise samples, it actually retains all the information (provided that $M > 0$ and $q$ has support everywhere that $p^*$ does). This NCE objective is computationally cheaper than MLE when the distribution $p_\theta(\cdot \mid x_{0:t})$ is a softmax distribution over $\{1, \ldots, K\}$ with large $K$. The reason is that the expensive normalizing constants in the numerator and denominator of equation (3) need not be computed. They cancel out because all the probabilities are conditioned on the same (actually observed) history.
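Before moving to continuous time, it is worth making the MLE cost in equation (2) concrete. Below is a minimal sketch of its Monte Carlo estimate, reusing the toy `hawkes_intensity` above; the names are ours. The nested loop over the $J$ sampled times and all $K$ types is exactly the expensive $\int \sum$ term.

```python
import math
import random

def mc_log_likelihood(events, T, K, intensity, J=100):
    """Estimate Eq. (2): sum of log-intensities at the observed events,
    minus (T/J) times the total intensity summed over J uniform times
    and all K event types. `events` is a list of (time, type);
    `intensity(k, t, history)` returns lambda_k(t | history)."""
    def history_before(t):
        return [(s, j) for (s, j) in events if s < t]

    positive = sum(math.log(intensity(j, s, history_before(s)))
                   for (s, j) in events)
    sampled_times = [random.uniform(0.0, T) for _ in range(J)]
    negative = (T / J) * sum(intensity(k, t, history_before(t))
                             for t in sampled_times for k in range(K))
    return positive - negative

# e.g., with the toy Hawkes intensity above:
# ll = mc_log_likelihood([(1.0, 1), (2.0, 0)], T=4.0, K=2,
#     intensity=lambda k, t, h: hawkes_intensity(k, t, h, mu, alpha, 1.0))
```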
3 Applying Noise-Contrastive Estimation in Continuous Time

The expensive $\int \sum$ term in equation (2) is rather similar to a normalizing constant,⁴ as it sums over non-occurring events. We might try to avoid computing it⁵ by discretizing the time interval $[0, T)$ into finitely many intervals of width $\Delta$ and applying NCE. In this case, we would be distinguishing the true sequence of events on an interval $[i\Delta, (i+1)\Delta)$ from corresponding noise sequences on the same interval, given the same (actually observed) history $x_{[0,i\Delta)}$. Unfortunately, the distribution $p_\theta(\cdot \mid x_{[0,i\Delta)})$ in the objective still involves an $\int \sum$ term where the integral is over $[i\Delta, (i+1)\Delta)$ and the inner sum is over $k$. The solution is to shrink the intervals to infinitesimal width $dt$. Then our log-posterior over each of them becomes
$$\log \frac{p_\theta(x^0_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m=1}^{M} q(x^m_{[t,t+dt)} \mid x^0_{[0,t)})}{\sum_{m=0}^{M} p_\theta(x^m_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m' \neq m} q(x^{m'}_{[t,t+dt)} \mid x^0_{[0,t)})} \quad (4)$$
We will define the noise distribution $q$ in terms of finite intensity functions $\lambda^q_k$, like the ones $\lambda_k$ that define $p_\theta$. As a result, at a given time $t$, there is only an infinitesimal probability that any of $\{x_t^0, x_t^1, \ldots, x_t^M\}$ is an event. Nonetheless, at each time $t \in [0, T)$, we will consider generating a noise event (for each $m > 0$) conditioned on the actually observed history $x_{[0,t)}$. Among these uncountably many times $t$, we may have some for which $x_t^0 \neq \emptyset$ (the observed events), or where $x_t^m \neq \emptyset$ for some $1 \le m \le M$ (the noise events). Almost surely, the set of times $t$ with a real or noise event remains finite. Our NCE objective is the expected sum of equation (4) over all such times $t$ in an event stream, when the stream is drawn uniformly from the set of streams in the training dataset—as in section 6—and the noise events are then drawn as above. Our objective ignores all other times $t$, as they provide no information about $\theta$. After all, when $x_t^0 = \cdots = x_t^M = \emptyset$, the probability that $x_t^0$ is the one drawn from the true model must be $1/(M+1)$ by symmetry, regardless of $\theta$. At these times, the ratio in equation (4) does reduce to $1/(M+1)$, since all probabilities are 1. At the times $t$ that we do consider, how do we compute equation (4)? Almost surely, exactly one of $x_t^0, \ldots, x_t^M$ is an event $k$ for some $k \neq \emptyset$. As a result, exactly one factor in each product is infinitesimal ($dt$ times the $\lambda_k$ or $\lambda^q_k$ intensity), and the other factors are 1. Thus, the $dt$ factors cancel out between numerator and denominator, and equation (4) simplifies to
$$\log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M \lambda^q_k(t \mid x^0_{[0,t)})} \;\text{ if } x_t^0 = k \quad\text{and}\quad \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M \lambda^q_k(t \mid x^0_{[0,t)})} \;\text{ if } x_t^0 = \emptyset \quad (5)$$
When a gradient-based optimization method adjusts $\theta$ to increase equation (5), the intuition is as follows. If $x_t^0 = k$, the model intensity $\lambda_k(t)$ is increased to explain why an event of type $k$ occurred at this particular time $t$. If $x_t^0 = \emptyset$, the model intensity $\lambda_k(t)$ is decreased to explain why an event of type $k$ did not actually occur at time $t$ (it was merely a noise event $x_t^m = k$, for some $m \neq 0$). These cases achieve the same qualitative effects as following the gradients of the first and second terms, respectively, in the log-likelihood (2). Our full objective is an expectation of the sum of finitely many such log-ratios:⁶
$$J_{\mathrm{NC}}(\theta) \overset{\text{def}}{=} \mathbb{E}_{x^0_{[0,T)} \sim p^*,\; x^{1:M}_{[0,T)} \sim q}\left[\, \sum_{t : x_t^0 \neq \emptyset} \log \frac{\lambda_{x_t^0}(t \mid x^0_{[0,t)})}{\tilde\lambda_{x_t^0}(t \mid x^0_{[0,t)})} \;+\; \sum_{m=1}^{M} \sum_{t : x_t^m \neq \emptyset} \log \frac{\lambda^q_{x_t^m}(t \mid x^0_{[0,t)})}{\tilde\lambda_{x_t^m}(t \mid x^0_{[0,t)})} \,\right] \quad (6)$$
where $\tilde\lambda_k(t \mid x^0_{[0,t)}) \overset{\text{def}}{=} \lambda_k(t \mid x^0_{[0,t)}) + M \lambda^q_k(t \mid x^0_{[0,t)})$.
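Once the observed and noise events are in hand (sampling is described in the next section), the bracketed term in equation (6) reduces to cheap per-event log-ratios. Below is a minimal sketch with names of our own choosing; both intensity callables take (type, time, history) and, as in equation (6), condition on the observed history only.

```python
import math

def nce_objective_terms(observed, noise, model_intensity, noise_intensity, M):
    """Sum the log-ratios of Eq. (5) over one observed stream and the pooled
    noise events from M noise streams. `observed` and `noise` are lists of
    (time, type) pairs."""
    def hist(t):  # observed history strictly before t
        return [(s, j) for (s, j) in observed if s < t]

    total = 0.0
    for (t, k) in observed:   # real events: push the model intensity up
        lam = model_intensity(k, t, hist(t))
        lam_q = noise_intensity(k, t, hist(t))
        total += math.log(lam / (lam + M * lam_q))
    for (t, k) in noise:      # noise events: push the model intensity down
        lam = model_intensity(k, t, hist(t))
        lam_q = noise_intensity(k, t, hist(t))
        total += math.log(lam_q / (lam + M * lam_q))
    return total
```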
The expectation is estimated by sampling: we draw an observed stream $x^0_{[0,T)}$ from the training dataset, then draw noise events $x^{1:M}_{[0,T)}$ from $q$ conditioned on the prefixes (histories) given by this observed stream, as explained in the next section.⁷ Given these samples, the bracketed term is easy to compute (and we then use backprop to get its gradient w.r.t. $\theta$, which is a stochastic gradient of the objective (6)). It eliminates the $\int \sum$ of equation (2) as desired, replacing it with a sum over the noise events. For each real or noise event, we compute only two intensities—the true and noise intensities of that event type at that time.

3.1 Efficient Sampling of Noise Events

The thinning algorithm (Lewis & Shedler, 1979; Liniger, 2009) is a rejection sampling method for drawing an event stream over a given observation interval $[0, T)$ from a continuous-time autoregressive process. Suppose we have already drawn the first $i - 1$ times, namely $t_1, \ldots, t_{i-1}$. For every future time $t \ge t_{i-1}$, let $H(t)$ denote the context $x_{[0,t)}$ consisting only of the events at those times, and define $\lambda(t \mid H(t)) \overset{\text{def}}{=} \sum_{k=1}^{K} \lambda_k(t \mid H(t))$. If $\lambda(t \mid H(t))$ were constant at $\bar\lambda$, we could draw the next event time as $t_i \sim t_{i-1} + \mathrm{Exp}(\bar\lambda)$. We would then set $x_t = \emptyset$ for all of the intermediate times $t \in (t_{i-1}, t_i)$, and finally draw the type $x_{t_i}$ of the event at time $t_i$, choosing $k$ with probability $\lambda_k(t_i \mid H(t_i)) / \bar\lambda$. But what if $\lambda(t \mid H(t))$ is not constant? The thinning algorithm still runs the foregoing method, taking $\bar\lambda$ to be any upper bound: $\bar\lambda \ge \lambda(t \mid H(t))$ for all $t \ge t_{i-1}$. In this case, there may be “leftover” probability mass not allocated to any $k$. This mass is allocated to $\emptyset$. A draw of $x_{t_i} = \emptyset$ means there was no event at time $t_i$ after all (corresponding to a rejected proposal). Either way, we now continue on to draw $t_{i+1}$ and $x_{t_{i+1}}$, using a version of $H(t)$ that has been updated to include the event or non-event $x_{t_i}$. The update to $H(t)$ affects $\lambda(t \mid H(t))$ and the choice of $\bar\lambda$.

How to sample noise streams. To draw a stream $x^m_{[0,T)}$ of noise events, we run the thinning algorithm, using the noise intensity functions $\lambda^q_k$. However, there is a modification: $H(t)$ is now defined to be $x^0_{[0,t)}$—the history from the observed event stream, rather than the previously sampled noise events—and is updated accordingly. This is because in equation (6), at each time $t$, all of $\{x_t^0, x_t^1, \ldots, x_t^M\}$ are conditioned on $x^0_{[0,t)}$ (akin to the discrete-time case). The full pseudocode is given in Algorithm 1 in the supplementary material; a minimal sketch of the basic sampler appears below.

Coarse-to-fine sampling of event types. Although our NCE method has eliminated the need to integrate over $t$, the thinning algorithm above still sums over $k$ in the definition of $\lambda^q(t \mid H(t))$. For large $K$, this sum is expensive if we take the noise distribution on each training minibatch to be, for example, the $p_\theta$ with the current value of $\theta$. That is a statistically efficient choice of noise distribution, but we can make a more computationally efficient choice. A simple scheme is to first generate each noise event with a coarse-grained type $c \in \{1, \ldots, C\}$, and then stochastically choose a refinement $k \in \{1, \ldots, K\}$:
$$\lambda^q_k(t \mid x^0_{[0,t)}) \overset{\text{def}}{=} \sum_{c=1}^{C} q(k \mid c)\, \lambda^q_c(t \mid x^0_{[0,t)}) \quad \text{for } k = 1, 2, \ldots, K \quad (7)$$
This noise model is parameterized by the functions $\lambda^q_c$ and the probabilities $q(k \mid c)$. The total intensity is now $\lambda^q(t \mid H(t)) = \sum_{c=1}^{C} \lambda^q_c(t)$, so we now need to examine only $C$ intensity functions, not $K$, to choose $\bar\lambda$ in the thinning algorithm.
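Here is a minimal sketch of the thinning sampler described above, assuming a single constant bound over the whole interval (per-interval bounds, as in the text, are tighter); the names are ours. For the pooled noise streams, one would pass the noise intensities multiplied by M and condition on the fixed observed history rather than on the growing `events` list.

```python
import random

def thinning_sample(T, K, intensity, lam_bar):
    """Draw one event stream on [0, T) by thinning. `intensity(k, t, history)`
    gives lambda_k(t | history); `lam_bar` must satisfy
    lam_bar >= sum_k lambda_k(t | history) for all t."""
    t, events = 0.0, []
    while True:
        t += random.expovariate(lam_bar)        # propose the next time
        if t >= T:
            return events
        lams = [intensity(k, t, events) for k in range(K)]
        u = random.uniform(0.0, lam_bar)
        # Accept with probability sum(lams)/lam_bar, choosing type k
        # proportionally to lams; the leftover mass is a rejection.
        cum = 0.0
        for k, lam in enumerate(lams):
            cum += lam
            if u < cum:
                events.append((t, k))
                break
```

For example, `thinning_sample(4.0, 2, lambda k, t, h: hawkes_intensity(k, t, h, mu, alpha, 1.0), lam_bar=2.0)` draws one stream from the toy Hawkes model above, provided the bound 2.0 indeed dominates the total intensity on the interval.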
If we partition the $K$ types into $C$ coarse-grained clusters (e.g., using domain knowledge), then evaluating the noise probability (7) within the training objective (6) is also fast because there is only one non-zero summand $c$ in equation (7). This simple scheme works well in our experiments. However, it could be elaborated by replacing $q(k \mid c)$ with $q(k \mid c, x^0_{[0,t)})$, by partitioning the event vocabulary automatically, by allowing overlapping clusters, or by using multiple levels of refinement: all of these elaborations are used by the fast hierarchical language model of Mnih & Hinton (2009).

How to draw M streams. An efficient way to draw the union of $M$ i.i.d. noise streams is to run the thinning algorithm once, with all intensities multiplied by $M$. In other words, the expected number of noise events on any interval is multiplied by $M$. This scheme does not tell us which specific noise stream $m$ generated a particular noise event, but the NCE objective (6) does not need to know that. The scheme works only because every noise stream $m$ has the same intensities $\lambda^q_k(t \mid x^0_{[0,t)})$ (not $\lambda^q_k(t \mid x^m_{[0,t)})$) at time $t$: there is no dependence on the previous events from that stream. Amusingly, NCE can now run even with non-integer $M$.

Fractional objective. One view of the thinning algorithm is that it accepts the proposed time $t_i$ with probability $\mu = \lambda(t_i)/\bar\lambda$, and in that case, labels it as $k$ with probability $\lambda_k(t_i)/\lambda(t_i)$. To get a greater diversity of noise samples, we can accept the time with probability 1, if we then scale its term in the objective (6) by $\mu$. This does not change the expectation (6) but may reduce the sampling variance in estimating it. Note that increasing the upper bound $\bar\lambda$ now has an effect similar to increasing $M$: more noise samples.⁸

3.2 Computational Cost Analysis

State-of-the-art intensity models use neural networks whose state summarizes the history and is updated after each event. So to train on a single event stream $x$ with $I \ge 0$ events, both MLE and NCE must perform $I$ updates to the neural state. Both MLE and NCE then evaluate the intensities $\lambda_k(t \mid x_{[0,t)})$ of these $I$ events, and also the intensities of a number of events that did not occur, which almost surely fall at other times.⁹ Consider the number of intensities evaluated. For MLE, assume the Monte Carlo integration technique mentioned in section 2.2. MLE computes the intensity $\lambda$ for $I$ observed events and for all $K$ possible events at each of $J$ sampled times. We take $J = \rho I$ (with randomized rounding to an integer), where $\rho > 0$ is a hyperparameter (Mei & Eisner, 2017). Hence, the expected total number of intensity evaluations is $I + \rho I K$. For NCE with the coarse-to-fine strategy, let $J$ be the total number of times proposed by the thinning algorithm. Observe that $\mathbb{E}[I] = \int_0^T \lambda^*(t \mid x_{[0,t)})\, dt$ and $\mathbb{E}[J] = M \cdot \int_0^T \bar\lambda(t \mid x_{[0,t)})\, dt$. Thus, $\mathbb{E}[J] \approx M \cdot \mathbb{E}[I]$ if (1) $\bar\lambda$ at any time is a tight upper bound on the noise event rate $\lambda^q$ at that time and (2) the average noise event rate well-approximates the average observed event rate (which should become true very early in training). To label or reject each of the $J$ proposals, NCE evaluates $C$ noise intensities $\lambda^q_c$; if the proposal is accepted with label $k$ (perhaps fractionally), it must also evaluate its model intensity $\lambda_k$. The noise and model intensities $\lambda^q_c$ and $\lambda_k$ must also be evaluated for the $I$ observed events. Hence, the total number of intensity evaluations is at most $(C+1)J + 2I$, which $\approx (C+1)MI + 2I$ in expectation.
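To make these counts concrete, a small sketch with made-up numbers (our own illustration):

```python
def evals_mle(I, K, rho):
    """Expected intensity evaluations for MLE: I observed events plus all
    K types at each of J = rho * I sampled times."""
    return I + rho * I * K

def evals_nce(I, M, C):
    """Upper bound for NCE with coarse-to-fine noise: (C + 1) evaluations
    per proposal (about M * I proposals) plus 2 per observed event."""
    return (C + 1) * M * I + 2 * I

# K = 10000 types, rho = 1, I = 1000 events:
print(evals_mle(1000, 10000, 1))   # 10001000
print(evals_nce(1000, 5, 1))       # 12000, nearly three orders of magnitude fewer
```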
Dividing by $I$, we see that making $(M + 1)(C + 1) \le \rho K$ suffices to make NCE’s stochastic objective take less work per observed stream than MLE’s stochastic objective. $M = 1$ and $C = 1$ is a valid choice. But NCE’s objective is less informed for smaller $M$, so its stochastic gradient carries less information about $\theta^*$. In section 5, we empirically investigate the effect of $M$ and $C$ on NCE and compare to MLE with different $\rho$.

3.3 Theoretical Guarantees: Optimality, Consistency and Efficiency

The following theorem implies that stochastic gradient ascent on NCE converges to a correct $\theta$ (if one exists):

Theorem 1 (Optimality). Under assumptions 1 and 2, $\theta \in \arg\max_\theta J_{\mathrm{NC}}(\theta)$ if and only if $p_\theta = p^*$.

This theorem falls out naturally when we rearrange the NCE objective in equation (6) as
$$\int_{t=0}^{T} \sum_{x^0_{[0,t)}} p^*(x^0_{[0,t)}) \sum_{k=1}^{K} \tilde\lambda^*_k(t \mid x^0_{[0,t)}) \underbrace{\left( \frac{\lambda^*_k(t \mid x^0_{[0,t)})}{\tilde\lambda^*_k(t \mid x^0_{[0,t)})} \log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\tilde\lambda_k(t \mid x^0_{[0,t)})} + M\, \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\tilde\lambda^*_k(t \mid x^0_{[0,t)})} \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\tilde\lambda_k(t \mid x^0_{[0,t)})} \right)}_{\text{a negative cross entropy}} dt$$
where $\lambda^*_k$ is the intensity under $p^*$ and $\tilde\lambda^*_k$ is defined analogously to $\tilde\lambda_k$: see the full derivation in Appendix B.1. Obviously, $p_\theta = p^*$ is sufficient to maximize the negative cross-entropy for any $k$ given any history and thus maximize $J_{\mathrm{NC}}(\theta)$. It turns out to be also necessary because any $\theta$ for which $p_\theta \neq p^*$ would, given assumption 1, end up decreasing the negative cross-entropy for some $k$ over some interval $(t, t')$ given a set of histories with non-zero measure. A full proof can be found in Appendix B.2: as we’ll see there, although it resembles Theorem 3.2 of Ma & Collins (2018), the proof of our Theorem 1 requires new analysis to handle continuous time, since Ma & Collins (2018) only worked on discrete-time sequential data. Moreover, our NCE method is strongly consistent for any $M \ge 1$ and approaches Fisher efficiency when $M$ is large. These properties are the same as in Ma & Collins (2018) and the proofs are also similar. Therefore, we leave the related theorems together with their assumptions and proofs to Appendices B.3 and B.4.

4 Related Work

The original “binary classification” NCE principle was proposed by Gutmann & Hyvärinen (2010) to estimate parameters for joint models of the form $p_\theta(x) \propto \exp(\mathrm{score}(x, \theta))$. Gutmann & Hyvärinen (2012) applied it to natural image statistics. It was then widely applied to natural language processing problems such as language modeling (Mnih & Teh, 2012), learning word representations (Mikolov et al., 2013) and machine translation (Vaswani et al., 2013). The “ranking-based” variant (Jozefowicz et al., 2016)¹⁰ is better suited for conditional distributions (Ma & Collins, 2018), including those used in autoregressive models, and has shown strong performance in large-scale language modeling with recurrent neural networks. Guo et al. (2018) tried NCE on (univariate) point processes but used the binary classification version. They used discrimination problems of the form: “Is event $k$ at time $t'$ the true next event following history $x_{[0,t]}$, or was it generated from a noise distribution?” Their classification-based NCE variant is not well-suited to conditional distributions (Ma & Collins, 2018): this complicates their method since they needed to build a parametric model of the local normalizing constant, giving them weaker theoretical guarantees and worse performance (see section 5). In contrast, we choose the ranking-based variant: our key idea of how to apply this to continuous time is new (see section 3) and requires new analysis (see Appendices A and B).
5 Experiments

We evaluate our NCE method on several synthetic and real-world datasets, with comparison to MLE, Guo et al. (2018) (denoted as b-NCE), and least-squares estimation (LSE) (Eichler et al., 2017). b-NCE has the same hyper-parameter $M$ as our NCE, namely the number of noise events. LSE’s objective involves an integral over times $[0, T)$, so it has the same hyper-parameter $\rho$ as MLE. On each of the datasets, we will show the estimated log-likelihood on the held-out data achieved by the models trained on the NCE, b-NCE, MLE and LSE objectives, as training consumes increasing amounts of computation—measured by the number of intensity evaluations and the elapsed wall-clock time (in seconds).¹¹ We always set the minibatch size $B$ to exhaust the GPU capacity, so smaller $\rho$ or $M$ allows larger $B$. Larger $B$ in turn increases the number of epochs per unit time (but decreases the possibly beneficial variance in the stochastic gradient updates).

5.1 Synthetic Datasets

In this section, we work on two synthetic datasets with $K = 10000$ event types. We choose the neural Hawkes process (NHP) (Mei & Eisner, 2017) to be our model $p_\theta$.¹² For the noise distribution $q$, we choose $C = 1$ and also parametrize its intensity function as a neural Hawkes process. The first dataset has sequences drawn from the randomly initialized $q$ such that we can check how well our NCE method could perform with the “ground-truth” noise distribution $q = p^*$; the sequences of the second dataset were drawn from a randomly initialized neural Hawkes process to evaluate both methods in the case that the model family $p_\theta$ is well-specified. We show (the zoomed-in views of the interesting parts of) multiple learning curves on each dataset in Figure 1: NCE is observed to consume substantially fewer intensity evaluations and less wall-clock time than MLE to achieve competitive log-likelihood, while b-NCE and LSE are slower and only converge to lower log-likelihood. Note that the wall-clock time may not be proportional to the number of intensities because computing intensities is not all of the work (e.g., there are LSTM states of both $p_\theta$ and $q$ to compute and store on GPU). We also observed that models that achieved comparable log-likelihood—no matter how they were trained—achieved comparable prediction accuracies (measured by root-mean-square-error for time and error rate for type). Therefore, our NCE still beats other methods at converging quickly to the highest prediction accuracy.

Ablation Study I: Always or Never Redraw Noise Samples. During training, for each observed datum, we can choose to either redraw a new set of noise samples every time we train on it or keep reusing the old samples: we did the latter for Figure 1. In experiments doing the former, we observed better generalization for tiny $M$ (e.g., $M = 1$) but substantial slow-down (because of sampling) with no improved generalization for large $M$ (e.g., 1000). Such results suggest that we always reuse old samples as long as $M$ is reasonably large: it is then what we do for all other experiments throughout the paper. See Appendix D.4 for more details of this ablation study, including learning curves of the “always redraw” strategy in Figure 5.

5.2 Real-World Social Interaction Datasets with Large K

We also evaluate the methods on several real-world social interaction datasets that have many event types: see Appendix D.1 for details (e.g., data statistics, pre-processing, data splits, etc.).
In this section, we show the learning curves on two particularly interesting datasets (explained below) in Figure 2 and leave those on the other datasets (which look similar) to Appendix D.3.

EuroEmail (Paranjape et al., 2017). This dataset contains time-stamped emails between anonymized members of a European research institute. We work on a subset of the 100 most active members and then end up with $K = 10000$ possible event types and 50000 training event tokens.

BitcoinOTC (Kumar et al., 2016). This dataset contains time-stamped rating (positive/negative) records between anonymized users on the BitcoinOTC trading platform. We work on a subset of the 100 most active users and then end up with $K = 19800$ (self-rating not allowed) possible event types but only 1000 training event tokens: this is an extremely data-sparse setting.

On these datasets, our model $p_\theta$ is still a neural Hawkes process. For the noise distribution $q$, we experiment with not only the coarse-to-fine neural process with $C = 1$ but also a homogeneous Poisson process. As shown in Figure 2, our NCE tends to perform better with the neural $q$: this is because a neural model can better fit the data and thus provide better training signals, analogous to how a good generator can benefit the discriminator in the generative adversarial framework (Goodfellow et al., 2014). NCE with Poisson $q$ also shows benefits through the early and middle training stages, but it might suffer larger variance (e.g., Figure 2a2) and end up with slightly worse generalization (e.g., Figure 2b2). MLE with different $\rho$ values all eventually achieve the highest log-likelihood ($\approx -10$ on EuroEmail and $\approx -15$ on BitcoinOTC), but most of these runs are so slow that their peaks are out of the current views. The b-NCE runs with different $M$ values are slower, achieve worse generalization and suffer larger variance than our NCE; interestingly, b-NCE prefers Poisson $q$ to neural $q$ (better generalization on EuroEmail and smaller variance on BitcoinOTC). In general, LSE is the slowest, and the highest log-likelihood it can achieve ($\approx -30$ on EuroEmail and $\approx -25$ on BitcoinOTC) is lower than that of MLE and our NCE.

Ablation Study II: Trained vs. Untrained q. The noise distributions (except the ground-truth $q$ for Synthetic-1) that we have used so far were all pretrained on the same data as we train $p_\theta$. The training cost is cheap: e.g., on the datasets in this section, the actual wall-clock training time for the neural $q$ is less than 2% of what is needed to train $p_\theta$, and training the Poisson $q$ costs even less.¹³,¹⁴ We also experimented with untrained noise distributions and they were observed to perform worse (e.g., worse generalization, slower convergence and larger variance). See Appendix D.5 for more details, including learning curves (Figure 6).

5.3 Real-World Dataset with Dynamic Facts

In this section, we let $p_\theta$ be a neural Datalog through time (NDTT) model (Mei et al., 2020). Such a model can be used in a domain in which new events dynamically update the set of event types and the structure of their intensity functions. We evaluate our method on training the domain-specific models presented by Mei et al. (2020), on the same datasets they used:

RoboCup (Chen & Mooney, 2008). This dataset logs actions of robot players during RoboCup soccer games. The set of possible event types dynamically changes over time (e.g., only the ball possessor can kick or pass) as the ball is frequently transferred between players (by passing or stealing).
There are $K = 528$ event types over all time, but only about 20 of them are possible at any given time.

IPTV (Xu et al., 2018). This dataset contains time-stamped records of 1000 users watching 49 TV programs over 2012. The users are not able to watch a program until it is released, so the number of event types grows from $K = 0$ to $K = 49000$ as programs are released one after another.

The learning curves are displayed in Figure 3. On RoboCup, NCE only progresses faster than MLE at the early to middle training stages: $M = 5$ and $M = 10$ eventually achieved the highest log-likelihood at the same time as MLE, and $M = 1$ ended up with worse generalization. On IPTV, NCE with $M = 1$ turned out to learn as well as and much faster than MLE. The dynamic architecture makes it hard to parallelize the intensity computation; MLE in particular performs poorly in wall-clock time, and we needed a remarkably small $\rho$ to let MLE finish within the shown time range. On both datasets, b-NCE and LSE drastically underperform MLE and NCE: their learning curves increase so slowly and achieve such poor generalization that only b-NCE with $M = 5$ and $M = 10$ are visible on the graphs.

Ablation Study III: Effect of C. In the above figures, we used the coarse-to-fine neural model as $q$. On RoboCup, each action (kick, pass, etc.) has a coarse-grained intensity, so $C = 5$. On IPTV, we partition the event vocabulary by TV program, so $C = 49$. We also experimented with $C = 1$: this reduces the number of intensities computed during sampling on both datasets, but has (slightly) worse generalization on RoboCup (since $q$ becomes less expressive). See Appendix D.6 for more details, including learning curves (Figure 7).

6 Conclusion

We have introduced a novel instantiation of the general NCE principle for training a multivariate point process model. Our objective has the same optimal parameters as the log-likelihood objective (if the model is well-specified), but needs fewer expensive function evaluations and much less wall-clock time in practice. This benefit is demonstrated on several synthetic and real-world datasets. Moreover, our method is provably consistent and efficient under mild assumptions.

Broader Impact

Our method is designed to train a multivariate point process for probabilistic modeling of event streams. By describing this method and releasing code, we hope to facilitate probabilistic modeling of continuous-time sequential data in many domains. Good probabilistic models make it possible to impute missing events, anticipate possible future events, and react accordingly. They can also be used in exploratory data analysis. In addition to making it more feasible and more convenient for domain experts to train complex models with many event types, our method reduces the energy cost necessary to do so. Examples of event streams with potential social impact include a person’s detailed food/exercise/sleep/medical event log, their social media interactions, their interactions with educational exercises or games, or their educational or workplace events (for time management and career planning); a customer’s interactions with a particular company or its website or other user interface; a company’s sales and purchases; geopolitical events, financial events, human activity modeling, music modeling, and dynamic resource requests. We are not aware of any negative broader impacts that might stem from publishing this work.

Disclosure of Funding Sources

This work was supported by a Ph.D. Fellowship Award to the first author by Bloomberg L.P.
and a National Science Foundation Grant No. 1718846 to the last author, as well as two Titan X Pascal GPUs donated by NVIDIA Corporation and compute cycles from the Maryland Advanced Research Computing Center. Acknowledgments We thank the anonymous NeurIPS reviewers and meta-reviewer as well as Hongteng Xu for helpful comments on this paper.
1. What is the focus and contribution of the paper regarding multivariate point processes? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical guarantees? 3. What are the weaknesses of the paper, especially regarding its assumptions and experimental settings? 4. Do you have any concerns about the applicability of the proposed method to real-world data sets? 5. How do you assess the clarity and quality of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes a version of the noise-contrastive estimation (NCE) method to alleviate the computational cost for multivariate point processes and provides its theoretical guarantees. The authors evaluate their work on both synthetic and real-world datasets and show that their method achieves comparable results with much less computational time compared with baselines. However, the assumptions shown in the theoretical part seem to mismatch with the experimental results. Strengths Applying NCE to make the learning of point processes scalable is a very good idea. Moreover, the authors provide theoretical support on the rationality of the proposed learning strategy, which improves the solidness of the proposed method. The proof seems correct. Weaknesses The main concern is the experimental part. Although the training/testing likelihood is reasonable for evaluating the convergence and the performance of the proposed method, I would like to see more comparisons on predictive tasks in real-world data sets. Additionally, assumption 1 in the paper may be questionable in some situations. The continuity is a strong assumption on the intensity function, which will lead the proposed theoretical work to be inapplicable to many widely-used point processes, e.g., Hawkes processes and self-correcting processes, whose intensities are not continuous. Because the authors apply some complicated point process models, e.g., the neural Hawkes process, and achieve encouraging performance, this assumption may be redundant or can be relaxed. In particular, I wonder if the assumption of Riemann integrability can be replaced with Lebesgue integration? Overall, I think it is nice work, but the conflict between the assumption and the experimental settings prevents me from accepting this work directly. Minors: The font size of text in figures should be enlarged. The information of the last reference (Xu et al. 2018) is wrong. It was published at IJCAI.
NIPS
Title Noise-Contrastive Estimation for Multivariate Point Processes Abstract The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation—a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time. 1 Introduction Maximum likelihood estimation (MLE) is a popular training method for generative models. However, to obtain the likelihood of a generative model given the observed data, one must compute the probability of each observed sample, which often includes an expensive normalizing constant. For example, in a language model, each word is typically drawn from a softmax distribution over a large vocabulary, whose normalizing constant requires a summation over the vocabulary. This paper aims to alleviate a similar computational cost for multivariate point processes. These generative models are natural tools to analyze streams of discrete events in continuous time. Their likelihood is improved not only by raising the probability of the observed events, but by lowering the probabilities of the events that were observed not to occur. There are infinitely many times at which no event of any type occurred; to predict these non-occurrences, the likelihood must integrate the infinitesimal event probability for each event type over the entire observed time interval. Therefore, the likelihood is expensive to compute, particularly when there are many possible event types. As an alternative to MLE, we propose to train the model by learning to discriminate the observed events from events sampled from a noise process. Our method is a version of noise-contrastive estimation (NCE), which was originally developed for unnormalized (energy-based) distributions and then extended to conditional softmax distributions such as language models. To the best of our knowledge, we are the first to extend the method and its theoretical guarantees (for optimality, consistency and efficiency) to the context of multivariate point processes.
We will also discuss similar efforts in related areas in section 4. On several datasets, our method shows compelling results. By evaluating fewer event intensities, training takes much less wall-clock time while still achieving competitive log-likelihood. 2 Preliminaries 2.1 Event Streams and Multivariate Point Processes Given a fixed time interval [0, T), we may observe an event stream x[0,T): at each continuous time t, the observation xt is one of the discrete types {∅, 1, . . . , K} where ∅ means no event. A non-∅ observation is called an event. A generative model of an event stream is called a multivariate point process.∗ We wish to fit an autoregressive probability model to observed event streams. In a discrete-time autoregressive model, events would be generated from left to right, where xt is drawn from a distribution that depends on x0, . . . , xt−1. The continuous-time version still generates events from left to right,1 but at any specific time t we have p(xt = ∅) = 1, with only an infinitesimal probability of any event. (For a computationally practical sampling method, see section 3.1.) The model is a stochastic process defined by functions λk that determine a finite intensity λk(t | x[0,t)) ≥ 0 for each event type k ≠ ∅ at each time t > 0. This intensity depends on the history of events x[0,t) that were drawn at times < t. It quantifies the instantaneous rate at time t of events of type k. That is, λk(t | x[0,t)) is the limit as dt → 0⁺ of 1/dt times the expected number of events of type k on the interval [t, t + dt), where the expectation is conditioned on the history. As the event probabilities are infinitesimal, the times of the events are almost surely distinct. To ensure that we have a point process, the intensity functions must be chosen such that the total number of events on any bounded interval is almost surely finite. Models of this form include inhomogeneous Poisson processes (Daley & Vere-Jones, 2007), in which the intensity functions ignore the history, as well as (non-explosive) Hawkes processes (Hawkes, 1971) and their modern neural versions (Du et al., 2016; Mei & Eisner, 2017). Most models use intensity functions that are continuous between events. Our analysis requires only Assumption 1 (Continuity). For any event stream x[0,T) and event type k ∈ {1, . . . , K}, λk(t | x[0,t)) is Riemann integrable, i.e., bounded and continuous almost everywhere w.r.t. time t. 2.2 Maximum Likelihood Estimation: Usefulness and Difficulties In practice, we parameterize the intensity functions by θ. We write pθ for the resulting probability density over event streams. When learning θ from data, we make the conventional assumption that the true point process p∗ actually falls into the chosen model family: Assumption 2 (Existence). There exists at least one parameter vector θ∗ such that pθ∗ = p∗. Then, as proved in Appendix A, such a θ∗ can be found as an argmax of

$$J_{\mathrm{LL}}(\theta) \stackrel{\mathrm{def}}{=} \mathbb{E}_{x_{[0,T)} \sim p^*}\big[\log p_\theta(x_{[0,T)})\big] \qquad (1)$$

Given assumption 1, the θ values that maximize JLL(θ) are exactly the set Θ∗ of values for which pθ = p∗: any θ for which pθ ≠ p∗ would end up with a strictly smaller JLL(θ) by increasing the cross entropy −p∗ log pθ over some interval (t, t′) for a set of histories with non-zero measure. If we modify equation (1) to take the expectation under the empirical distribution of event streams x[0,T) in the training dataset, then JLL(θ) is proportional to the log-likelihood of θ.
For any x[0,T) that satisfies the condition in assumption 1, the log-density used in equation (1) can be expressed in terms of λk(t | x[0,t)):

$$\log p_\theta(x_{[0,T)}) \;=\; \sum_{t:\, x_t \neq \emptyset} \log \lambda_{x_t}(t \mid x_{[0,t)}) \;-\; \int_{t=0}^{T} \sum_{k=1}^{K} \lambda_k(t \mid x_{[0,t)})\, dt \qquad (2)$$

Notice that the second term lacks a log. It is expensive to compute in the following cases: • The total number of event types K is large, making the sum over k slow. • The integral over [0, T) is slow to estimate well, e.g., via a Monte Carlo estimate $\frac{T}{J} \sum_{j=1}^{J} \sum_{k=1}^{K} \lambda_k(t_j)$ where each tj is randomly sampled from the uniform distribution over [0, T). • The chosen model architecture makes it hard to parallelize the λk(tj) computation over j and k. (∗This paper uses endnotes instead of footnotes. They are found at the start of the supplementary material.) 2.3 Noise-Contrastive Estimation in Discrete Time For autoregressive models of discrete-time sequences, a similar computational inefficiency can be tackled by applying the principle of noise-contrastive estimation (Gutmann & Hyvärinen, 2010), as follows. For each history $x_{0:t} \stackrel{\mathrm{def}}{=} x_0 x_1 \cdots x_{t-1}$ in training data, NCE trains the model pθ to discriminate the actually observed datum xt from some noise samples whose distribution q is known. The intuition is: optimal performance is obtained if and only if pθ matches the true distribution p∗. More precisely, given a bag $\{x_t^0, x_t^1, \ldots, x_t^M\}$, where exactly one element of the bag was drawn from p∗ and the rest drawn i.i.d. from q, consider the log-posterior probability (via Bayes’ Theorem2) that $x_t^0$ was the one drawn from p∗:

$$\log \frac{p^*(x_t^0 \mid x_{0:t}) \prod_{m=1}^{M} q(x_t^m \mid x_{0:t})}{\sum_{m=0}^{M} p^*(x_t^m \mid x_{0:t}) \prod_{m' \neq m} q(x_t^{m'} \mid x_{0:t})} \qquad (3)$$

The “ranking” variant of NCE (Jozefowicz et al., 2016) substitutes pθ for p∗ in this expression, and seeks θ (e.g., by stochastic gradient ascent) to maximize the expectation of the resulting quantity when $x_t^0$ is a random observation in training data,3 x0:t is its history, and $x_t^1, \ldots, x_t^M$ are drawn i.i.d. from q(· | x0:t). This objective is really just conditional maximum log-likelihood on a supervised dataset of (M+1)-way classification problems. Each problem presents an unordered set of M + 1 samples—one drawn from p∗ and the others drawn i.i.d. from q. The task is to guess which sample was drawn from p∗. Conditional MLE trains θ to maximize (in expectation) the log-probability that the model assigns to the correct answer. In the infinite-data limit, it will find θ (if possible) such that these log-probabilities match the true ones given by (3). For that, it is sufficient for θ to be such that pθ = p∗. Given assumption 2, Ma & Collins (2018) show that pθ = p∗ is also necessary, i.e., the NCE task is sufficient to find the true parameters. Although the NCE objective does not learn to predict the full observed sample xt as MLE does, but only to distinguish it from the M noise samples, their theorem implies that in expectation over all possible sets of M noise samples, it actually retains all the information (provided that M > 0 and q has support everywhere that p∗ does). This NCE objective is computationally cheaper than MLE when the distribution pθ(· | x0:t) is a softmax distribution over {1, . . . , K} with large K. The reason is that the expensive normalizing constants in the numerator and denominator of equation (3) need not be computed. They cancel out because all the probabilities are conditioned on the same (actually observed) history.
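To make the cost of the MLE objective concrete, the following is a minimal Python sketch of the Monte Carlo estimate of equation (2); the callable `intensities(t)` (assumed to return all K intensities given the observed history before t) and the hyperparameter names are illustrative assumptions, not the authors' released code:

```python
import math
import random

def mc_log_likelihood(intensities, events, T, rho):
    """Monte Carlo estimate of the log-density in equation (2).

    intensities(t) -> length-K list of lambda_k(t | x_[0,t)), history implicit;
    events          -> list of (t, k) pairs with k in {0, ..., K-1};
    the integral is estimated at J = rho * I uniformly sampled times.
    """
    pos = sum(math.log(intensities(t)[k]) for (t, k) in events)
    J = max(1, round(rho * len(events)))
    sample_times = [random.uniform(0.0, T) for _ in range(J)]
    neg = (T / J) * sum(sum(intensities(t)) for t in sample_times)
    return pos - neg  # each sampled time costs K intensity evaluations
```

The `sum(intensities(t))` in the negative term is exactly the expensive sum over all K event types that the NCE objective below will avoid.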
3 Applying Noise-Contrastive Estimation in Continuous Time The expensive ∫∑ term in equation (2) is rather similar to a normalizing constant,4 as it sums over non-occurring events. We might try to avoid computing it5 by discretizing the time interval [0, T) into finitely many intervals of width ∆ and applying NCE. In this case, we would be distinguishing the true sequence of events on an interval [i∆, (i + 1)∆) from corresponding noise sequences on the same interval, given the same (actually observed) history x[0,i∆). Unfortunately, the distribution pθ(· | x[0,i∆)) in the objective still involves an ∫∑ term where the integral is over [i∆, (i + 1)∆) and the inner sum is over k. The solution is to shrink the intervals to infinitesimal width dt. Then our log-posterior over each of them becomes

$$\log \frac{p_\theta(x^0_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m=1}^{M} q(x^m_{[t,t+dt)} \mid x^0_{[0,t)})}{\sum_{m=0}^{M} p_\theta(x^m_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m' \neq m} q(x^{m'}_{[t,t+dt)} \mid x^0_{[0,t)})} \qquad (4)$$

We will define the noise distribution q in terms of finite intensity functions $\lambda^q_k$, like the ones λk that define pθ. As a result, at a given time t, there is only an infinitesimal probability that any of $\{x_t^0, x_t^1, \ldots, x_t^M\}$ is an event. Nonetheless, at each time t ∈ [0, T), we will consider generating a noise event (for each m > 0) conditioned on the actually observed history x[0,t). Among these uncountably many times t, we may have some for which $x_t^0 \neq \emptyset$ (the observed events), or where $x_t^m \neq \emptyset$ for some 1 ≤ m ≤ M (the noise events). Almost surely, the set of times t with a real or noise event remains finite. Our NCE objective is the expected sum of equation (4) over all such times t in an event stream, when the stream is drawn uniformly from the set of streams in the training dataset—as in section 6—and the noise events are then drawn as above. Our objective ignores all other times t, as they provide no information about θ. After all, when $x_t^0 = \cdots = x_t^M = \emptyset$, the probability that $x_t^0$ is the one drawn from the true model must be 1/(M + 1) by symmetry, regardless of θ. At these times, the ratio in equation (4) does reduce to 1/(M + 1), since all probabilities are 1. At the times t that we do consider, how do we compute equation (4)? Almost surely, exactly one of $x_t^0, \ldots, x_t^M$ is an event k for some k ≠ ∅. As a result, exactly one factor in each product is infinitesimal (dt times the $\lambda_k$ or $\lambda^q_k$ intensity), and the other factors are 1. Thus, the dt factors cancel out between numerator and denominator, and equation (4) simplifies to

$$\log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})} \text{ if } x_t^0 = k, \qquad \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})} \text{ if } x_t^0 = \emptyset \qquad (5)$$

When a gradient-based optimization method adjusts θ to increase equation (5), the intuition is as follows. If $x_t^0 = k$, the model intensity λk(t) is increased to explain why an event of type k occurred at this particular time t. If $x_t^0 = \emptyset$, the model intensity λk(t) is decreased to explain why an event of type k did not actually occur at time t (it was merely a noise event $x_t^m = k$, for some m ≠ 0). These cases achieve the same qualitative effects as following the gradients of the first and second terms, respectively, in the log-likelihood (2). Our full objective is an expectation of the sum of finitely many such log-ratios:6

$$J_{\mathrm{NC}}(\theta) \stackrel{\mathrm{def}}{=} \mathbb{E}_{x^0_{[0,T)} \sim p^*,\; x^{1:M}_{[0,T)} \sim q} \left[ \sum_{t:\, x_t^0 \neq \emptyset} \log \frac{\lambda_{x_t^0}(t \mid x^0_{[0,t)})}{\bar{\lambda}_{x_t^0}(t \mid x^0_{[0,t)})} \;+\; \sum_{m=1}^{M} \sum_{t:\, x_t^m \neq \emptyset} \log \frac{\lambda^q_{x_t^m}(t \mid x^0_{[0,t)})}{\bar{\lambda}_{x_t^m}(t \mid x^0_{[0,t)})} \right] \qquad (6)$$

where $\bar{\lambda}_k(t \mid x^0_{[0,t)}) \stackrel{\mathrm{def}}{=} \lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})$.
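Given intensities at the real and noise event times, the summands of equation (6) are just the log-ratios in equation (5). Below is a minimal sketch under assumed interfaces (`model_int`, `noise_int`, and the event lists are hypothetical names, not the authors' code):

```python
import math

def nce_objective_terms(model_int, noise_int, obs_events, noise_events, M):
    """Sum of the log-ratios in equation (6) for one observed stream.

    model_int(k, t) -> lambda_k(t | x^0_[0,t));
    noise_int(k, t) -> lambda^q_k(t | x^0_[0,t));
    obs_events and noise_events are lists of (t, k) pairs.
    """
    total = 0.0
    for (t, k) in obs_events:    # real event: gradient pushes lambda_k(t) up
        lam, lam_q = model_int(k, t), noise_int(k, t)
        total += math.log(lam / (lam + M * lam_q))
    for (t, k) in noise_events:  # noise event: gradient pushes lambda_k(t) down
        lam, lam_q = model_int(k, t), noise_int(k, t)
        total += math.log(lam_q / (lam + M * lam_q))
    return total  # only two intensity evaluations per real or noise event
```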
The expectation is estimated by sampling: we draw an observed stream $x^0_{[0,T)}$ from the training dataset, then draw noise events $x^{1:M}_{[0,T)}$ from q conditioned on the prefixes (histories) given by this observed stream, as explained in the next section. Given these samples, the bracketed term is easy to compute (and we then use backprop to get its gradient w.r.t. θ, which is a stochastic gradient of the objective (6)). It eliminates the ∫∑ of equation (2) as desired, replacing it with a sum over the noise events. For each real or noise event, we compute only two intensities—the true and noise intensities of that event type at that time. 3.1 Efficient Sampling of Noise Events The thinning algorithm (Lewis & Shedler, 1979; Liniger, 2009) is a rejection sampling method for drawing an event stream over a given observation interval [0, T) from a continuous-time autoregressive process. Suppose we have already drawn the first i − 1 times, namely t1, . . . , ti−1. For every future time t ≥ ti−1, let H(t) denote the context x[0,t) consisting only of the events at those times, and define $\lambda(t \mid H(t)) \stackrel{\mathrm{def}}{=} \sum_{k=1}^{K} \lambda_k(t \mid H(t))$. If λ(t | H(t)) were constant at λ, we could draw the next event time as ti ∼ ti−1 + Exp(λ). We would then set xt = ∅ for all of the intermediate times t ∈ (ti−1, ti), and finally draw the type xti of the event at time ti, choosing k with probability λk(ti | H(ti)) / λ. But what if λ(t | H(t)) is not constant? The thinning algorithm still runs the foregoing method, taking λ to be any upper bound: λ ≥ λ(t | H(t)) for all t ≥ ti−1. In this case, there may be “leftover” probability mass not allocated to any k. This mass is allocated to ∅. A draw of xti = ∅ means there was no event at time ti after all (corresponding to a rejected proposal). Either way, we now continue on to draw ti+1 and xti+1, using a version of H(t) that has been updated to include the event or non-event xti. The update to H(t) affects λ(t | H(t)) and the choice of λ. How to sample noise streams. To draw a stream $x^m_{[0,T)}$ of noise events, we run the thinning algorithm, using the noise intensity functions $\lambda^q_k$. However, there is a modification: H(t) is now defined to be $x^0_{[0,t)}$—the history from the observed event stream, rather than the previously sampled noise events—and is updated accordingly. This is because in equation (6), at each time t, all of $\{x_t^0, x_t^1, \ldots, x_t^M\}$ are conditioned on $x^0_{[0,t)}$ (akin to the discrete-time case).7 The full pseudocode is given in Algorithm 1 in the supplementary material. Coarse-to-fine sampling of event types. Although our NCE method has eliminated the need to integrate over t, the thinning algorithm above still sums over k in the definition of λq(t | H(t)). For large K, this sum is expensive if we take the noise distribution on each training minibatch to be, for example, the pθ with the current value of θ. That is a statistically efficient choice of noise distribution, but we can make a more computationally efficient choice. A simple scheme is to first generate each noise event with a coarse-grained type c ∈ {1, . . . , C}, and then stochastically choose a refinement k ∈ {1, . . . , K}:

$$\lambda^q_k(t \mid x^0_{[0,t)}) \stackrel{\mathrm{def}}{=} \sum_{c=1}^{C} q(k \mid c)\, \lambda^q_c(t \mid x^0_{[0,t)}) \quad \text{for } k = 1, 2, \ldots, K \qquad (7)$$

This noise model is parameterized by the functions $\lambda^q_c$ and the probabilities q(k | c). The total intensity is now $\lambda^q(t \mid H(t)) = \sum_{c=1}^{C} \lambda^q_c(t)$, so we now need to examine only C intensity functions, not K, to choose λ in the thinning algorithm.
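The noise sampler can be sketched as follows, assuming for simplicity a single noise stream, a constant upper bound, and a hypothetical callable `noise_total_intensity(t)` that conditions on the observed history x^0_[0,t); the coarse-to-fine refinement of equation (7) is folded into a hypothetical `draw_type` (first a coarse type c, then k ∼ q(k | c)). This is a sketch under those assumptions, not the authors' Algorithm 1:

```python
import random

def sample_noise_stream(noise_total_intensity, draw_type, lam_upper, T):
    """Thinning: propose times at constant rate lam_upper, then accept the
    proposal at time t with probability lambda^q(t) / lam_upper; rejected
    proposals correspond to non-events ("no event" at that time).

    noise_total_intensity(t) -> lambda^q(t | x^0_[0,t)), assumed <= lam_upper;
    draw_type(t)             -> an event type k in {1, ..., K}.
    """
    events, t = [], 0.0
    while True:
        t += random.expovariate(lam_upper)  # gap to the next proposed time
        if t >= T:
            return events                   # list of (time, type) pairs
        if random.random() < noise_total_intensity(t) / lam_upper:
            events.append((t, draw_type(t)))
```

Multiplying both `lam_upper` and the acceptance intensity by M would draw the union of M i.i.d. noise streams in one pass, as described below.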
If we partition the K types into C coarse-grained clusters (e.g., using domain knowledge), then evaluating the noise probability (7) within the training objective (6) is also fast because there is only one non-zero summand c in equation (7). This simple scheme works well in our experiments. However, it could be elaborated by replacing q(k | c) with $q(k \mid c, x^0_{[0,t)})$, by partitioning the event vocabulary automatically, by allowing overlapping clusters, or by using multiple levels of refinement: all of these elaborations are used by the fast hierarchical language model of Mnih & Hinton (2009). How to draw M streams. An efficient way to draw the union of M i.i.d. noise streams is to run the thinning algorithm once, with all intensities multiplied by M. In other words, the expected number of noise events on any interval is multiplied by M. This scheme does not tell us which specific noise stream m generated a particular noise event, but the NCE objective (6) does not need to know that. The scheme works only because every noise stream m has the same intensities $\lambda^q_k(t \mid x^0_{[0,t)})$ (not $\lambda^q_k(t \mid x^m_{[0,t)})$) at time t: there is no dependence on the previous events from that stream. Amusingly, NCE can now run even with non-integer M. Fractional objective. One view of the thinning algorithm is that it accepts the proposed time ti with probability µ = λ(ti)/λ, and in that case, labels it as k with probability λk(ti)/λ(ti). To get a greater diversity of noise samples, we can accept the time with probability 1, if we then scale its term in the objective (6) by µ. This does not change the expectation (6) but may reduce the sampling variance in estimating it. Note that increasing the upper bound λ now has an effect similar to increasing M: more noise samples.8 3.2 Computational Cost Analysis State-of-the-art intensity models use neural networks whose state summarizes the history and is updated after each event. So to train on a single event stream x with I ≥ 0 events, both MLE and NCE must perform I updates to the neural state. Both MLE and NCE then evaluate the intensities λk(t | x[0,t)) of these I events, and also the intensities of a number of events that did not occur, which almost surely fall at other times.9 Consider the number of intensities evaluated. For MLE, assume the Monte Carlo integration technique mentioned in section 2.2. MLE computes the intensity λk for the I observed events and for all K possible events at each of J sampled times. We take J = ρI (with randomized rounding to an integer), where ρ > 0 is a hyperparameter (Mei & Eisner, 2017). Hence, the expected total number of intensity evaluations is I + ρIK. For NCE with the coarse-to-fine strategy, let J be the total number of times proposed by the thinning algorithm. Observe that $\mathbb{E}[I] = \int_0^T \lambda^*(t \mid x_{[0,t)})\, dt$ and $\mathbb{E}[J] = M \cdot \int_0^T \lambda(t \mid x_{[0,t)})\, dt$, where λ is the thinning upper bound. Thus, E[J] ≈ M · E[I] if (1) λ at any time is a tight upper bound on the noise event rate λq at that time and (2) the average noise event rate well-approximates the average observed event rate (which should become true very early in training). To label or reject each of the J proposals, NCE evaluates C noise intensities $\lambda^q_c$; if the proposal is accepted with label k (perhaps fractionally), it must also evaluate its model intensity λk. The noise and model intensities $\lambda^q_c$ and λk must also be evaluated for the I observed events. Hence, the total number of intensity evaluations is at most (C + 1)J + 2I, which ≈ (C + 1)MI + 2I in expectation.
Dividing by I, we see that making (M + 1)(C + 1) ≤ ρK suffices to make NCE’s stochastic objective take less work per observed stream than MLE’s stochastic objective. M = 1 and C = 1 is a valid choice. But NCE’s objective is less informed for smaller M, so its stochastic gradient carries less information about θ∗. In section 5, we empirically investigate the effect of M and C on NCE and compare to MLE with different ρ. 3.3 Theoretical Guarantees: Optimality, Consistency and Efficiency The following theorem implies that stochastic gradient ascent on NCE converges to a correct θ (if one exists): Theorem 1 (Optimality). Under assumptions 1 and 2, θ ∈ argmaxθ JNC(θ) if and only if pθ = p∗. This theorem falls out naturally when we rearrange the NCE objective in equation (6) as

$$\int_{t=0}^{T} \sum_{x^0_{[0,t)}} p^*(x^0_{[0,t)}) \sum_{k=1}^{K} \bar{\lambda}^*_k(t \mid x^0_{[0,t)}) \underbrace{\left( \frac{\lambda^*_k(t \mid x^0_{[0,t)})}{\bar{\lambda}^*_k(t \mid x^0_{[0,t)})} \log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\bar{\lambda}_k(t \mid x^0_{[0,t)})} + M\, \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\bar{\lambda}^*_k(t \mid x^0_{[0,t)})} \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\bar{\lambda}_k(t \mid x^0_{[0,t)})} \right)}_{\text{a negative cross entropy}} dt$$

where $\lambda^*_k$ is the intensity under p∗ and $\bar{\lambda}^*_k$ is defined analogously to $\bar{\lambda}_k$: see the full derivation in Appendix B.1. Obviously, pθ = p∗ is sufficient to maximize the negative cross-entropy for any k given any history and thus maximize JNC(θ). It turns out to be also necessary because any θ for which pθ ≠ p∗ would, given assumption 1, end up decreasing the negative cross-entropy for some k over some interval (t, t′) given a set of histories with non-zero measure. A full proof can be found in Appendix B.2: as we’ll see there, although it resembles Theorem 3.2 of Ma & Collins (2018), the proof of our Theorem 1 requires new analysis to handle continuous time, since Ma & Collins (2018) only worked on discrete-time sequential data. Moreover, our NCE method is strongly consistent for any M ≥ 1 and approaches Fisher efficiency when M is large. These properties are the same as in Ma & Collins (2018) and the proofs are also similar. Therefore, we leave the related theorems together with their assumptions and proofs to Appendices B.3 and B.4. 4 Related Work The original “binary classification” NCE principle was proposed by Gutmann & Hyvärinen (2010) to estimate parameters for joint models of the form pθ(x) ∝ exp(score(x, θ)). Gutmann & Hyvärinen (2012) applied it to natural image statistics. It was then widely applied to natural language processing problems such as language modeling (Mnih & Teh, 2012), learning word representations (Mikolov et al., 2013) and machine translation (Vaswani et al., 2013). The “ranking-based” variant (Jozefowicz et al., 2016)10 is better suited for conditional distributions (Ma & Collins, 2018), including those used in autoregressive models, and has shown strong performance in large-scale language modeling with recurrent neural networks. Guo et al. (2018) tried NCE on (univariate) point processes but used the binary classification version. They used discrimination problems of the form: “Is event k at time t′ the true next event following history x[0,t], or was it generated from a noise distribution?” Their classification-based NCE variant is not well-suited to conditional distributions (Ma & Collins, 2018): this complicates their method since they needed to build a parametric model of the local normalizing constant, giving them weaker theoretical guarantees and worse performance (see section 5). In contrast, we choose the ranking-based variant: our key idea of how to apply this to continuous time is new (see section 3) and requires new analysis (see Appendices A and B).
5 Experiments We evaluate our NCE method on several synthetic and real-world datasets, with comparison to MLE, Guo et al. (2018) (denoted as b-NCE), and least-squares estimation (LSE) (Eichler et al., 2017). b-NCE has the same hyper-parameter M as our NCE, namely the number of noise events. LSE’s objective involves an integral over times [0, T), so it has the same hyper-parameter ρ as MLE. On each of the datasets, we will show the estimated log-likelihood on the held-out data achieved by the models trained on the NCE, b-NCE, MLE and LSE objectives, as training consumes increasing amounts of computation—measured by the number of intensity evaluations and the elapsed wall-clock time (in seconds).11 We always set the minibatch size B to exhaust the GPU capacity, so smaller ρ or M allows larger B. Larger B in turn increases the number of epochs per unit time (but decreases the possibly beneficial variance in the stochastic gradient updates). 5.1 Synthetic Datasets In this section, we work on two synthetic datasets with K = 10000 event types. We choose the neural Hawkes process (NHP) (Mei & Eisner, 2017) to be our model pθ.12 For the noise distribution q, we choose C = 1 and also parametrize its intensity function as a neural Hawkes process. The first dataset has sequences drawn from the randomly initialized q so that we can check how well our NCE method performs with the “ground-truth” noise distribution q = p∗; the sequences of the second dataset were drawn from a randomly initialized neural Hawkes process to evaluate both methods in the case that the model family pθ is well-specified. We show (zoomed-in views of the interesting parts of) multiple learning curves on each dataset in Figure 1: NCE is observed to consume substantially fewer intensity evaluations and less wall-clock time than MLE to achieve competitive log-likelihood, while b-NCE and LSE are slower and only converge to lower log-likelihood. Note that the wall-clock time may not be proportional to the number of intensities because computing intensities is not all of the work (e.g., there are LSTM states of both pθ and q to compute and store on the GPU). We also observed that models that achieved comparable log-likelihood—no matter how they were trained—achieved comparable prediction accuracies (measured by root-mean-square error for time and error rate for type). Therefore, our NCE still beats the other methods at converging quickly to the highest prediction accuracy. Ablation Study I: Always or Never Redraw Noise Samples. During training, for each observed stream, we can choose either to redraw a new set of noise samples every time we train on it or to keep reusing the old samples: we did the latter for Figure 1. In experiments doing the former, we observed better generalization for tiny M (e.g., M = 1) but substantial slow-down (because of sampling) with no improved generalization for large M (e.g., M = 1000). Such results suggest that we always reuse old samples as long as M is reasonably large: this is what we do for all other experiments throughout the paper. See Appendix D.4 for more details of this ablation study, including learning curves of the “always redraw” strategy in Figure 5. 5.2 Real-World Social Interaction Datasets with Large K We also evaluate the methods on several real-world social interaction datasets that have many event types: see Appendix D.1 for details (e.g., data statistics, pre-processing, data splits, etc.).
In this section, we show the learning curves on two particularly interesting datasets (explained below) in Figure 2 and leave those on the other datasets (which look similar) to Appendix D.3. EuroEmail (Paranjape et al., 2017). This dataset contains time-stamped emails between anonymized members of a European research institute. We work on a subset of the 100 most active members and end up with K = 10000 possible event types and 50000 training event tokens. BitcoinOTC (Kumar et al., 2016). This dataset contains time-stamped rating (positive/negative) records between anonymized users on the BitcoinOTC trading platform. We work on a subset of the 100 most active users and end up with K = 19800 (self-rating not allowed) possible event types but only 1000 training event tokens: this is an extremely data-sparse setting. On these datasets, our model pθ is still a neural Hawkes process. For the noise distribution q, we experiment with not only the coarse-to-fine neural process with C = 1 but also a homogeneous Poisson process. As shown in Figure 2, our NCE tends to perform better with the neural q: this is because a neural model can better fit the data and thus provide better training signals, analogous to how a good generator can benefit the discriminator in the generative adversarial framework (Goodfellow et al., 2014). NCE with Poisson q also shows benefits through the early and middle training stages, but it might suffer larger variance (e.g., Figure 2a2) and end up with slightly worse generalization (e.g., Figure 2b2). MLE runs with different ρ values all eventually achieve the highest log-likelihood (≈ −10 on EuroEmail and ≈ −15 on BitcoinOTC), but most of these runs are so slow that their peaks are outside the current views. The b-NCE runs with different M values are slower, achieve worse generalization and suffer larger variance than our NCE; interestingly, b-NCE prefers Poisson q to neural q (better generalization on EuroEmail and smaller variance on BitcoinOTC). In general, LSE is the slowest, and the highest log-likelihood it can achieve (≈ −30 on EuroEmail and ≈ −25 on BitcoinOTC) is lower than that of MLE and our NCE. Ablation Study II: Trained vs. Untrained q. The noise distributions (except the ground-truth q for Synthetic-1) that we have used so far were all pretrained on the same data on which we train pθ. The training cost is cheap: e.g., on the datasets in this section, the actual wall-clock training time for the neural q is less than 2% of what is needed to train pθ, and training the Poisson q costs even less.13,14 We also experimented with untrained noise distributions and they were observed to perform worse (e.g., worse generalization, slower convergence and larger variance). See Appendix D.5 for more details, including learning curves (Figure 6). 5.3 Real-World Datasets with Dynamic Facts In this section, we let pθ be a neural Datalog through time (NDTT) model (Mei et al., 2020). Such a model can be used in a domain in which new events dynamically update the set of event types and the structure of their intensity functions. We evaluate our method on training the domain-specific models presented by Mei et al. (2020), on the same datasets they used: RoboCup (Chen & Mooney, 2008). This dataset logs the actions of robot players during RoboCup soccer games. The set of possible event types dynamically changes over time (e.g., only the ball possessor can kick or pass) as the ball is frequently transferred between players (by passing or stealing).
There are K = 528 event types over all time, but only about 20 of them are possible at any given time. IPTV (Xu et al., 2018). This dataset contains time-stamped records of 1000 users watching 49 TV programs throughout 2012. The users are not able to watch a program until it is released, so the number of event types grows from K = 0 to K = 49000 as programs are released one after another. The learning curves are displayed in Figure 3. On RoboCup, NCE only progresses faster than MLE at the early to middle training stages: M = 5 and M = 10 eventually achieved the highest log-likelihood at the same time as MLE, and M = 1 ended up with worse generalization. On IPTV, NCE with M = 1 turned out to learn as well as MLE, and much faster. The dynamic architecture makes it hard to parallelize the intensity computation; MLE in particular performs poorly in wall-clock time, and we needed a remarkably small ρ to let MLE finish within the shown time range. On both datasets, b-NCE and LSE drastically underperform MLE and NCE: their learning curves increase so slowly and achieve such poor generalization that only b-NCE with M = 5 and M = 10 are visible on the graphs. Ablation Study III: Effect of C. In the above figures, we used the coarse-to-fine neural model as q. On RoboCup, each action (kick, pass, etc.) has a coarse-grained intensity, so C = 5. On IPTV, we partition the event vocabulary by TV program, so C = 49. We also experimented with C = 1: this reduces the number of intensities computed during sampling on both datasets, but has (slightly) worse generalization on RoboCup (since q becomes less expressive). See Appendix D.6 for more details, including learning curves (Figure 7). 6 Conclusion We have introduced a novel instantiation of the general NCE principle for training a multivariate point process model. Our objective has the same optimal parameters as the log-likelihood objective (if the model is well-specified), but needs fewer expensive function evaluations and much less wall-clock time in practice. This benefit is demonstrated on several synthetic and real-world datasets. Moreover, our method is provably consistent and efficient under mild assumptions. Broader Impact Our method is designed to train a multivariate point process for probabilistic modeling of event streams. By describing this method and releasing code, we hope to facilitate probabilistic modeling of continuous-time sequential data in many domains. Good probabilistic models make it possible to impute missing events, anticipate possible future events, and react accordingly. They can also be used in exploratory data analysis. In addition to making it more feasible and more convenient for domain experts to train complex models with many event types, our method reduces the energy cost necessary to do so. Examples of event streams with potential social impact include a person’s detailed food/exercise/sleep/medical event log, their social media interactions, their interactions with educational exercises or games, or their educational or workplace events (for time management and career planning); a customer’s interactions with a particular company or its website or other user interface; a company’s sales and purchases; geopolitical events, financial events, human activity modeling, music modeling, and dynamic resource requests. We are not aware of any negative broader impacts that might stem from publishing this work. Disclosure of Funding Sources This work was supported by a Ph.D. Fellowship Award to the first author by Bloomberg L.P.
and a National Science Foundation Grant No. 1718846 to the last author, as well as two Titan X Pascal GPUs donated by NVIDIA Corporation and compute cycles from the Maryland Advanced Research Computing Center. Acknowledgments We thank the anonymous NeurIPS reviewers and meta-reviewer as well as Hongteng Xu for helpful comments on this paper.
1. What is the focus and contribution of the paper on multivariate point processes? 2. What are the strengths of the proposed approach, particularly in terms of efficiency? 3. What are the weaknesses of the paper, especially regarding its novelty and comparisons with other works? 4. Do you have any concerns about the theoretical properties and empirical evaluation of the proposed method? 5. How does the reviewer assess the clarity and quality of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a novel noise-contrastive estimation method for multivariate point processes. The authors evaluate their method on both synthetic and real-world datasets, and show that the proposed method takes much less wall-clock time while still achieving competitive log-likelihood. Strengths The idea of applying NCE to point processes looks interesting, even though this is not the first time it has been proposed. The research question of finding an efficient estimator is relevant to the NeurIPS community. The paper is well-written and clear. Weaknesses The NCE estimator for MPPs looks interesting. However, the paper suffers from a number of flaws that should be better addressed. 1. The paper proposes an NCE estimator for MPPs. However, this is not the first attempt to apply NCE to point processes. The INITIATOR model (Guo et al., 2018) has already attempted to do so. I believe the extension from univariate point processes to multivariate ones should not be considered a significant contribution. 2. Whether the advantages of NCE carry over to point processes is a question. The main benefit of NCE is to reduce the computational cost of MLE. However, the proposed method involves a sampling procedure, which is usually time-consuming. The authors also fail to consider and compare with other existing estimators for point processes that are more efficient than MLE. For example, the least-squares estimator (which has been integrated into the Python library “tick” for learning point processes) even has a closed-form solution for learning linear multivariate Hawkes processes. Further, the broader category of martingale estimators, into which LSE falls, also possesses the desired properties of consistency and asymptotic normality. These commonly-used methods should also be mentioned and discussed. 3. The theoretical properties seem to be inherited from NCE, rather than being derived specifically for the proposed instantiation. 4. The empirical evaluation is weak. The paper only involves one baseline (NHP) with MLE as the underlying ground truth. More baselines should be considered, such as parametric point processes (e.g., vanilla Hawkes processes), recurrent marked point processes, etc.
NIPS
Title Noise-Contrastive Estimation for Multivariate Point Processes Abstract The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation—a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time. 1 Introduction Maximum likelihood estimation (MLE) is a popular training method for generative models. However, to obtain the likelihood of a generative model given the observed data, one must compute the probability of each observed sample, which often includes an expensive normalizing constant. For example, in a language model, each word is typically drawn from a softmax distribution over a large vocabulary, whose normalizing constant requires a summation over the vocabulary. This paper aims to alleviate a similar computational cost for multivariate point processes. These generative models are natural tools to analyze streams of discrete events in continuous time. Their likelihood is improved not only by raising the probability of the observed events, but by lowering the probabilities of the events that were observed not to occur. There are infinitely many times at which no event of any type occurred; to predict these non-occurrences, the likelihood must integrate the infinitesimal event probability for each event type over the entire observed time interval. Therefore, the likelihood is expensive to compute, particularly when there are many possible event types. As an alternative to MLE, we propose to train the model by learning to discriminate the observed events from events sampled from a noise process. Our method is a version of noise-contrastive estimation (NCE), which was originally developed for unnormalized (energy-based) distributions and then extended to conditional softmax distributions such as language models. To the best of our knowledge, we are the first to extend the method and its theoretical guarantees (for optimality, consistency and efficiency) to the context of multivariate point processes.
We will also discuss similar efforts in related areas in section 4. On several datasets, our method shows compelling results. By evaluating fewer event intensities, training takes much less wall-clock time while still achieving competitive log-likelihood. 2 Preliminaries 2.1 Event Streams and Multivariate Point Processes Given a fixed time interval [0, T), we may observe an event stream x[0,T): at each continuous time t, the observation xt is one of the discrete types {∅, 1, . . . , K} where ∅ means no event. A non-∅ observation is called an event. A generative model of an event stream is called a multivariate point process.∗ We wish to fit an autoregressive probability model to observed event streams. In a discrete-time autoregressive model, events would be generated from left to right, where xt is drawn from a distribution that depends on x0, . . . , xt−1. The continuous-time version still generates events from left to right,1 but at any specific time t we have p(xt = ∅) = 1, with only an infinitesimal probability of any event. (For a computationally practical sampling method, see section 3.1.) The model is a stochastic process defined by functions λk that determine a finite intensity λk(t | x[0,t)) ≥ 0 for each event type k ≠ ∅ at each time t > 0. This intensity depends on the history of events x[0,t) that were drawn at times < t. It quantifies the instantaneous rate at time t of events of type k. That is, λk(t | x[0,t)) is the limit as dt → 0⁺ of 1/dt times the expected number of events of type k on the interval [t, t + dt), where the expectation is conditioned on the history. As the event probabilities are infinitesimal, the times of the events are almost surely distinct. To ensure that we have a point process, the intensity functions must be chosen such that the total number of events on any bounded interval is almost surely finite. Models of this form include inhomogeneous Poisson processes (Daley & Vere-Jones, 2007), in which the intensity functions ignore the history, as well as (non-explosive) Hawkes processes (Hawkes, 1971) and their modern neural versions (Du et al., 2016; Mei & Eisner, 2017). Most models use intensity functions that are continuous between events. Our analysis requires only Assumption 1 (Continuity). For any event stream x[0,T) and event type k ∈ {1, . . . , K}, λk(t | x[0,t)) is Riemann integrable, i.e., bounded and continuous almost everywhere w.r.t. time t. 2.2 Maximum Likelihood Estimation: Usefulness and Difficulties In practice, we parameterize the intensity functions by θ. We write pθ for the resulting probability density over event streams. When learning θ from data, we make the conventional assumption that the true point process p∗ actually falls into the chosen model family: Assumption 2 (Existence). There exists at least one parameter vector θ∗ such that pθ∗ = p∗. Then, as proved in Appendix A, such a θ∗ can be found as an argmax of

$$J_{\mathrm{LL}}(\theta) \stackrel{\mathrm{def}}{=} \mathbb{E}_{x_{[0,T)} \sim p^*}\big[\log p_\theta(x_{[0,T)})\big] \qquad (1)$$

Given assumption 1, the θ values that maximize JLL(θ) are exactly the set Θ∗ of values for which pθ = p∗: any θ for which pθ ≠ p∗ would end up with a strictly smaller JLL(θ) by increasing the cross entropy −p∗ log pθ over some interval (t, t′) for a set of histories with non-zero measure. If we modify equation (1) to take the expectation under the empirical distribution of event streams x[0,T) in the training dataset, then JLL(θ) is proportional to the log-likelihood of θ.
For any x[0,T) that satisfies the condition in assumption 1, the log-density used in equation (1) can be expressed in terms of λk(t | x[0,t)):

$$\log p_\theta(x_{[0,T)}) \;=\; \sum_{t:\, x_t \neq \emptyset} \log \lambda_{x_t}(t \mid x_{[0,t)}) \;-\; \int_{t=0}^{T} \sum_{k=1}^{K} \lambda_k(t \mid x_{[0,t)})\, dt \qquad (2)$$

Notice that the second term lacks a log. It is expensive to compute in the following cases: • The total number of event types K is large, making the sum over k slow. • The integral over [0, T) is slow to estimate well, e.g., via a Monte Carlo estimate $\frac{T}{J} \sum_{j=1}^{J} \sum_{k=1}^{K} \lambda_k(t_j)$ where each tj is randomly sampled from the uniform distribution over [0, T). • The chosen model architecture makes it hard to parallelize the λk(tj) computation over j and k. (∗This paper uses endnotes instead of footnotes. They are found at the start of the supplementary material.) 2.3 Noise-Contrastive Estimation in Discrete Time For autoregressive models of discrete-time sequences, a similar computational inefficiency can be tackled by applying the principle of noise-contrastive estimation (Gutmann & Hyvärinen, 2010), as follows. For each history $x_{0:t} \stackrel{\mathrm{def}}{=} x_0 x_1 \cdots x_{t-1}$ in training data, NCE trains the model pθ to discriminate the actually observed datum xt from some noise samples whose distribution q is known. The intuition is: optimal performance is obtained if and only if pθ matches the true distribution p∗. More precisely, given a bag $\{x_t^0, x_t^1, \ldots, x_t^M\}$, where exactly one element of the bag was drawn from p∗ and the rest drawn i.i.d. from q, consider the log-posterior probability (via Bayes’ Theorem2) that $x_t^0$ was the one drawn from p∗:

$$\log \frac{p^*(x_t^0 \mid x_{0:t}) \prod_{m=1}^{M} q(x_t^m \mid x_{0:t})}{\sum_{m=0}^{M} p^*(x_t^m \mid x_{0:t}) \prod_{m' \neq m} q(x_t^{m'} \mid x_{0:t})} \qquad (3)$$

The “ranking” variant of NCE (Jozefowicz et al., 2016) substitutes pθ for p∗ in this expression, and seeks θ (e.g., by stochastic gradient ascent) to maximize the expectation of the resulting quantity when $x_t^0$ is a random observation in training data,3 x0:t is its history, and $x_t^1, \ldots, x_t^M$ are drawn i.i.d. from q(· | x0:t). This objective is really just conditional maximum log-likelihood on a supervised dataset of (M+1)-way classification problems. Each problem presents an unordered set of M + 1 samples—one drawn from p∗ and the others drawn i.i.d. from q. The task is to guess which sample was drawn from p∗. Conditional MLE trains θ to maximize (in expectation) the log-probability that the model assigns to the correct answer. In the infinite-data limit, it will find θ (if possible) such that these log-probabilities match the true ones given by (3). For that, it is sufficient for θ to be such that pθ = p∗. Given assumption 2, Ma & Collins (2018) show that pθ = p∗ is also necessary, i.e., the NCE task is sufficient to find the true parameters. Although the NCE objective does not learn to predict the full observed sample xt as MLE does, but only to distinguish it from the M noise samples, their theorem implies that in expectation over all possible sets of M noise samples, it actually retains all the information (provided that M > 0 and q has support everywhere that p∗ does). This NCE objective is computationally cheaper than MLE when the distribution pθ(· | x0:t) is a softmax distribution over {1, . . . , K} with large K. The reason is that the expensive normalizing constants in the numerator and denominator of equation (3) need not be computed. They cancel out because all the probabilities are conditioned on the same (actually observed) history.
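To make the cost of the MLE objective concrete, the following is a minimal Python sketch of the Monte Carlo estimate of equation (2); the callable `intensities(t)` (assumed to return all K intensities given the observed history before t) and the hyperparameter names are illustrative assumptions, not the authors' released code:

```python
import math
import random

def mc_log_likelihood(intensities, events, T, rho):
    """Monte Carlo estimate of the log-density in equation (2).

    intensities(t) -> length-K list of lambda_k(t | x_[0,t)), history implicit;
    events          -> list of (t, k) pairs with k in {0, ..., K-1};
    the integral is estimated at J = rho * I uniformly sampled times.
    """
    pos = sum(math.log(intensities(t)[k]) for (t, k) in events)
    J = max(1, round(rho * len(events)))
    sample_times = [random.uniform(0.0, T) for _ in range(J)]
    neg = (T / J) * sum(sum(intensities(t)) for t in sample_times)
    return pos - neg  # each sampled time costs K intensity evaluations
```

The `sum(intensities(t))` in the negative term is exactly the expensive sum over all K event types that the NCE objective below will avoid.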
3 Applying Noise-Contrastive Estimation in Continuous Time The expensive ∫∑ term in equation (2) is rather similar to a normalizing constant,4 as it sums over non-occurring events. We might try to avoid computing it5 by discretizing the time interval [0, T) into finitely many intervals of width ∆ and applying NCE. In this case, we would be distinguishing the true sequence of events on an interval [i∆, (i + 1)∆) from corresponding noise sequences on the same interval, given the same (actually observed) history x[0,i∆). Unfortunately, the distribution pθ(· | x[0,i∆)) in the objective still involves an ∫∑ term where the integral is over [i∆, (i + 1)∆) and the inner sum is over k. The solution is to shrink the intervals to infinitesimal width dt. Then our log-posterior over each of them becomes

$$\log \frac{p_\theta(x^0_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m=1}^{M} q(x^m_{[t,t+dt)} \mid x^0_{[0,t)})}{\sum_{m=0}^{M} p_\theta(x^m_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m' \neq m} q(x^{m'}_{[t,t+dt)} \mid x^0_{[0,t)})} \qquad (4)$$

We will define the noise distribution q in terms of finite intensity functions $\lambda^q_k$, like the ones λk that define pθ. As a result, at a given time t, there is only an infinitesimal probability that any of $\{x_t^0, x_t^1, \ldots, x_t^M\}$ is an event. Nonetheless, at each time t ∈ [0, T), we will consider generating a noise event (for each m > 0) conditioned on the actually observed history x[0,t). Among these uncountably many times t, we may have some for which $x_t^0 \neq \emptyset$ (the observed events), or where $x_t^m \neq \emptyset$ for some 1 ≤ m ≤ M (the noise events). Almost surely, the set of times t with a real or noise event remains finite. Our NCE objective is the expected sum of equation (4) over all such times t in an event stream, when the stream is drawn uniformly from the set of streams in the training dataset—as in section 6—and the noise events are then drawn as above. Our objective ignores all other times t, as they provide no information about θ. After all, when $x_t^0 = \cdots = x_t^M = \emptyset$, the probability that $x_t^0$ is the one drawn from the true model must be 1/(M + 1) by symmetry, regardless of θ. At these times, the ratio in equation (4) does reduce to 1/(M + 1), since all probabilities are 1. At the times t that we do consider, how do we compute equation (4)? Almost surely, exactly one of $x_t^0, \ldots, x_t^M$ is an event k for some k ≠ ∅. As a result, exactly one factor in each product is infinitesimal (dt times the $\lambda_k$ or $\lambda^q_k$ intensity), and the other factors are 1. Thus, the dt factors cancel out between numerator and denominator, and equation (4) simplifies to

$$\log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})} \text{ if } x_t^0 = k, \qquad \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})} \text{ if } x_t^0 = \emptyset \qquad (5)$$

When a gradient-based optimization method adjusts θ to increase equation (5), the intuition is as follows. If $x_t^0 = k$, the model intensity λk(t) is increased to explain why an event of type k occurred at this particular time t. If $x_t^0 = \emptyset$, the model intensity λk(t) is decreased to explain why an event of type k did not actually occur at time t (it was merely a noise event $x_t^m = k$, for some m ≠ 0). These cases achieve the same qualitative effects as following the gradients of the first and second terms, respectively, in the log-likelihood (2). Our full objective is an expectation of the sum of finitely many such log-ratios:6

$$J_{\mathrm{NC}}(\theta) \stackrel{\mathrm{def}}{=} \mathbb{E}_{x^0_{[0,T)} \sim p^*,\; x^{1:M}_{[0,T)} \sim q} \left[ \sum_{t:\, x_t^0 \neq \emptyset} \log \frac{\lambda_{x_t^0}(t \mid x^0_{[0,t)})}{\bar{\lambda}_{x_t^0}(t \mid x^0_{[0,t)})} \;+\; \sum_{m=1}^{M} \sum_{t:\, x_t^m \neq \emptyset} \log \frac{\lambda^q_{x_t^m}(t \mid x^0_{[0,t)})}{\bar{\lambda}_{x_t^m}(t \mid x^0_{[0,t)})} \right] \qquad (6)$$

where $\bar{\lambda}_k(t \mid x^0_{[0,t)}) \stackrel{\mathrm{def}}{=} \lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})$.
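Given intensities at the real and noise event times, the summands of equation (6) are just the log-ratios in equation (5). Below is a minimal sketch under assumed interfaces (`model_int`, `noise_int`, and the event lists are hypothetical names, not the authors' code):

```python
import math

def nce_objective_terms(model_int, noise_int, obs_events, noise_events, M):
    """Sum of the log-ratios in equation (6) for one observed stream.

    model_int(k, t) -> lambda_k(t | x^0_[0,t));
    noise_int(k, t) -> lambda^q_k(t | x^0_[0,t));
    obs_events and noise_events are lists of (t, k) pairs.
    """
    total = 0.0
    for (t, k) in obs_events:    # real event: gradient pushes lambda_k(t) up
        lam, lam_q = model_int(k, t), noise_int(k, t)
        total += math.log(lam / (lam + M * lam_q))
    for (t, k) in noise_events:  # noise event: gradient pushes lambda_k(t) down
        lam, lam_q = model_int(k, t), noise_int(k, t)
        total += math.log(lam_q / (lam + M * lam_q))
    return total  # only two intensity evaluations per real or noise event
```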
The expectation is estimated by sampling: we draw an observed stream $x^0_{[0,T)}$ from the training dataset, then draw noise events $x^{1:M}_{[0,T)}$ from q conditioned on the prefixes (histories) given by this observed stream, as explained in the next section. Given these samples, the bracketed term is easy to compute (and we then use backprop to get its gradient w.r.t. θ, which is a stochastic gradient of the objective (6)). It eliminates the ∫∑ of equation (2) as desired, replacing it with a sum over the noise events. For each real or noise event, we compute only two intensities—the true and noise intensities of that event type at that time. 3.1 Efficient Sampling of Noise Events The thinning algorithm (Lewis & Shedler, 1979; Liniger, 2009) is a rejection sampling method for drawing an event stream over a given observation interval [0, T) from a continuous-time autoregressive process. Suppose we have already drawn the first i − 1 times, namely t1, . . . , ti−1. For every future time t ≥ ti−1, let H(t) denote the context x[0,t) consisting only of the events at those times, and define $\lambda(t \mid H(t)) \stackrel{\mathrm{def}}{=} \sum_{k=1}^{K} \lambda_k(t \mid H(t))$. If λ(t | H(t)) were constant at λ, we could draw the next event time as ti ∼ ti−1 + Exp(λ). We would then set xt = ∅ for all of the intermediate times t ∈ (ti−1, ti), and finally draw the type xti of the event at time ti, choosing k with probability λk(ti | H(ti)) / λ. But what if λ(t | H(t)) is not constant? The thinning algorithm still runs the foregoing method, taking λ to be any upper bound: λ ≥ λ(t | H(t)) for all t ≥ ti−1. In this case, there may be “leftover” probability mass not allocated to any k. This mass is allocated to ∅. A draw of xti = ∅ means there was no event at time ti after all (corresponding to a rejected proposal). Either way, we now continue on to draw ti+1 and xti+1, using a version of H(t) that has been updated to include the event or non-event xti. The update to H(t) affects λ(t | H(t)) and the choice of λ. How to sample noise streams. To draw a stream $x^m_{[0,T)}$ of noise events, we run the thinning algorithm, using the noise intensity functions $\lambda^q_k$. However, there is a modification: H(t) is now defined to be $x^0_{[0,t)}$—the history from the observed event stream, rather than the previously sampled noise events—and is updated accordingly. This is because in equation (6), at each time t, all of $\{x_t^0, x_t^1, \ldots, x_t^M\}$ are conditioned on $x^0_{[0,t)}$ (akin to the discrete-time case).7 The full pseudocode is given in Algorithm 1 in the supplementary material. Coarse-to-fine sampling of event types. Although our NCE method has eliminated the need to integrate over t, the thinning algorithm above still sums over k in the definition of λq(t | H(t)). For large K, this sum is expensive if we take the noise distribution on each training minibatch to be, for example, the pθ with the current value of θ. That is a statistically efficient choice of noise distribution, but we can make a more computationally efficient choice. A simple scheme is to first generate each noise event with a coarse-grained type c ∈ {1, . . . , C}, and then stochastically choose a refinement k ∈ {1, . . . , K}:

$$\lambda^q_k(t \mid x^0_{[0,t)}) \stackrel{\mathrm{def}}{=} \sum_{c=1}^{C} q(k \mid c)\, \lambda^q_c(t \mid x^0_{[0,t)}) \quad \text{for } k = 1, 2, \ldots, K \qquad (7)$$

This noise model is parameterized by the functions $\lambda^q_c$ and the probabilities q(k | c). The total intensity is now $\lambda^q(t \mid H(t)) = \sum_{c=1}^{C} \lambda^q_c(t)$, so we now need to examine only C intensity functions, not K, to choose λ in the thinning algorithm.
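The noise sampler can be sketched as follows, assuming for simplicity a single noise stream, a constant upper bound, and a hypothetical callable `noise_total_intensity(t)` that conditions on the observed history x^0_[0,t); the coarse-to-fine refinement of equation (7) is folded into a hypothetical `draw_type` (first a coarse type c, then k ∼ q(k | c)). This is a sketch under those assumptions, not the authors' Algorithm 1:

```python
import random

def sample_noise_stream(noise_total_intensity, draw_type, lam_upper, T):
    """Thinning: propose times at constant rate lam_upper, then accept the
    proposal at time t with probability lambda^q(t) / lam_upper; rejected
    proposals correspond to non-events ("no event" at that time).

    noise_total_intensity(t) -> lambda^q(t | x^0_[0,t)), assumed <= lam_upper;
    draw_type(t)             -> an event type k in {1, ..., K}.
    """
    events, t = [], 0.0
    while True:
        t += random.expovariate(lam_upper)  # gap to the next proposed time
        if t >= T:
            return events                   # list of (time, type) pairs
        if random.random() < noise_total_intensity(t) / lam_upper:
            events.append((t, draw_type(t)))
```

Multiplying both `lam_upper` and the acceptance intensity by M would draw the union of M i.i.d. noise streams in one pass, as described below.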
If we partition the K types into C coarse-grained clusters (e.g., using domain knowledge), then evaluating the noise probability (7) within the training objective (6) is also fast because there is only one non-zero summand c in equation (7). This simple scheme works well in our experiments. However, it could be elaborated by replacing q(k | c) with $q(k \mid c, x^0_{[0,t)})$, by partitioning the event vocabulary automatically, by allowing overlapping clusters, or by using multiple levels of refinement: all of these elaborations are used by the fast hierarchical language model of Mnih & Hinton (2009). How to draw M streams. An efficient way to draw the union of M i.i.d. noise streams is to run the thinning algorithm once, with all intensities multiplied by M. In other words, the expected number of noise events on any interval is multiplied by M. This scheme does not tell us which specific noise stream m generated a particular noise event, but the NCE objective (6) does not need to know that. The scheme works only because every noise stream m has the same intensities $\lambda^q_k(t \mid x^0_{[0,t)})$ (not $\lambda^q_k(t \mid x^m_{[0,t)})$) at time t: there is no dependence on the previous events from that stream. Amusingly, NCE can now run even with non-integer M. Fractional objective. One view of the thinning algorithm is that it accepts the proposed time ti with probability µ = λ(ti)/λ, and in that case, labels it as k with probability λk(ti)/λ(ti). To get a greater diversity of noise samples, we can accept the time with probability 1, if we then scale its term in the objective (6) by µ. This does not change the expectation (6) but may reduce the sampling variance in estimating it. Note that increasing the upper bound λ now has an effect similar to increasing M: more noise samples.8 3.2 Computational Cost Analysis State-of-the-art intensity models use neural networks whose state summarizes the history and is updated after each event. So to train on a single event stream x with I ≥ 0 events, both MLE and NCE must perform I updates to the neural state. Both MLE and NCE then evaluate the intensities λk(t | x[0,t)) of these I events, and also the intensities of a number of events that did not occur, which almost surely fall at other times.9 Consider the number of intensities evaluated. For MLE, assume the Monte Carlo integration technique mentioned in section 2.2. MLE computes the intensity λk for the I observed events and for all K possible events at each of J sampled times. We take J = ρI (with randomized rounding to an integer), where ρ > 0 is a hyperparameter (Mei & Eisner, 2017). Hence, the expected total number of intensity evaluations is I + ρIK. For NCE with the coarse-to-fine strategy, let J be the total number of times proposed by the thinning algorithm. Observe that $\mathbb{E}[I] = \int_0^T \lambda^*(t \mid x_{[0,t)})\, dt$ and $\mathbb{E}[J] = M \cdot \int_0^T \lambda(t \mid x_{[0,t)})\, dt$, where λ is the thinning upper bound. Thus, E[J] ≈ M · E[I] if (1) λ at any time is a tight upper bound on the noise event rate λq at that time and (2) the average noise event rate well-approximates the average observed event rate (which should become true very early in training). To label or reject each of the J proposals, NCE evaluates C noise intensities $\lambda^q_c$; if the proposal is accepted with label k (perhaps fractionally), it must also evaluate its model intensity λk. The noise and model intensities $\lambda^q_c$ and λk must also be evaluated for the I observed events. Hence, the total number of intensity evaluations is at most (C + 1)J + 2I, which ≈ (C + 1)MI + 2I in expectation.
Dividing by I, we see that making (M+1)(C+1) ≤ ρK suffices to make NCE's stochastic objective take less work per observed stream than MLE's stochastic objective. M = 1 and C = 1 is a valid choice. But NCE's objective is less informed for smaller M, so its stochastic gradient carries less information about θ*. In section 5, we empirically investigate the effect of M and C on NCE and compare to MLE with different ρ.

3.3 Theoretical Guarantees: Optimality, Consistency and Efficiency

The following theorem implies that stochastic gradient ascent on NCE converges to a correct θ (if one exists):

Theorem 1 (Optimality). Under assumptions 1 and 2, θ ∈ argmax_θ J_NC(θ) if and only if p_θ = p*.

This theorem falls out naturally when we rearrange the NCE objective in equation (6) as

\int_{t=0}^{T} \sum_{x^0_{[0,t)}} p^*(x^0_{[0,t)}) \sum_{k=1}^{K} \bar{\lambda}^*_k(t \mid x^0_{[0,t)}) \underbrace{\left( \frac{\lambda^*_k(t \mid x^0_{[0,t)})}{\bar{\lambda}^*_k(t \mid x^0_{[0,t)})} \log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\bar{\lambda}_k(t \mid x^0_{[0,t)})} + M\,\frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\bar{\lambda}^*_k(t \mid x^0_{[0,t)})} \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\bar{\lambda}_k(t \mid x^0_{[0,t)})} \right)}_{\text{a negative cross entropy}} \, dt

where λ*_k is the intensity under p* and λ̄*_k is defined analogously to λ̄_k: see the full derivation in Appendix B.1. Obviously, p_θ = p* is sufficient to maximize the negative cross-entropy for any k given any history, and thus to maximize J_NC(θ). It turns out to be also necessary, because any θ for which p_θ ≠ p* would, given assumption 1, end up decreasing the negative cross-entropy for some k over some interval (t, t′) given a set of histories with non-zero measure. A full proof can be found in Appendix B.2: as we'll see there, although it resembles Theorem 3.2 of Ma & Collins (2018), the proof of our Theorem 1 requires new analysis to handle continuous time, since Ma & Collins (2018) only worked on discrete-time sequential data.

Moreover, our NCE method is strongly consistent for any M ≥ 1 and approaches Fisher efficiency when M is large. These properties are the same as in Ma & Collins (2018) and the proofs are also similar. Therefore, we leave the related theorems together with their assumptions and proofs to Appendices B.3 and B.4.

4 Related Work

The original "binary classification" NCE principle was proposed by Gutmann & Hyvärinen (2010) to estimate parameters for joint models of the form p_θ(x) ∝ exp(score(x, θ)). Gutmann & Hyvärinen (2012) applied it to natural image statistics. It was then widely applied to natural language processing problems such as language modeling (Mnih & Teh, 2012), learning word representations (Mikolov et al., 2013) and machine translation (Vaswani et al., 2013). The "ranking-based" variant (Jozefowicz et al., 2016)¹⁰ is better suited for conditional distributions (Ma & Collins, 2018), including those used in autoregressive models, and has shown strong performance in large-scale language modeling with recurrent neural networks.

Guo et al. (2018) tried NCE on (univariate) point processes but used the binary classification version. They used discrimination problems of the form: "Is event k at time t′ the true next event following history x_{[0,t]}, or was it generated from a noise distribution?" Their classification-based NCE variant is not well-suited to conditional distributions (Ma & Collins, 2018): this complicates their method, since they needed to build a parametric model of the local normalizing constant, giving them weaker theoretical guarantees and worse performance (see section 5). In contrast, we choose the ranking-based variant: our key idea of how to apply this to continuous time is new (see section 3) and requires new analysis (see Appendices A and B).
5 Experiments

We evaluate our NCE method on several synthetic and real-world datasets, with comparison to MLE, Guo et al. (2018) (denoted as b-NCE), and least-squares estimation (LSE) (Eichler et al., 2017). b-NCE has the same hyper-parameter M as our NCE, namely the number of noise events. LSE's objective involves an integral over times [0, T), so it has the same hyper-parameter ρ as MLE. On each of the datasets, we will show the estimated log-likelihood on the held-out data achieved by the models trained on the NCE, b-NCE, MLE and LSE objectives, as training consumes increasing amounts of computation—measured by the number of intensity evaluations and the elapsed wall-clock time (in seconds).¹¹ We always set the minibatch size B to exhaust the GPU capacity, so smaller ρ or M allows larger B. Larger B in turn increases the number of epochs per unit time (but decreases the possibly beneficial variance in the stochastic gradient updates).

5.1 Synthetic Datasets

In this section, we work on two synthetic datasets with K = 10000 event types. We choose the neural Hawkes process (NHP) (Mei & Eisner, 2017) to be our model p_θ.¹² For the noise distribution q, we choose C = 1 and also parametrize its intensity function as a neural Hawkes process. The first dataset has sequences drawn from the randomly initialized q, so that we can check how well our NCE method performs with the "ground-truth" noise distribution q = p*; the sequences of the second dataset were drawn from a randomly initialized neural Hawkes process, to evaluate both methods in the case that the model family p_θ is well-specified. We show (zoomed-in views of the interesting parts of) multiple learning curves on each dataset in Figure 1: NCE is observed to consume substantially fewer intensity evaluations and less wall-clock time than MLE to achieve competitive log-likelihood, while b-NCE and LSE are slower and only converge to lower log-likelihood. Note that the wall-clock time may not be proportional to the number of intensities, because computing intensities is not all of the work (e.g., there are LSTM states of both p_θ and q to compute and store on GPU). We also observed that models that achieved comparable log-likelihood—no matter how they were trained—achieved comparable prediction accuracies (measured by root-mean-square error for time and error rate for type). Therefore, our NCE still beats the other methods at converging quickly to the highest prediction accuracy.

Ablation Study I: Always or Never Redraw Noise Samples. During training, for each observed stream, we can choose either to redraw a new set of noise samples every time we train on it or to keep reusing the old samples: we did the latter for Figure 1. In experiments doing the former, we observed better generalization for tiny M (e.g., M = 1) but substantial slow-down (because of sampling) with no improved generalization for large M (e.g., 1000). Such results suggest always reusing old samples as long as M is reasonably large: this is what we do for all other experiments throughout the paper. See Appendix D.4 for more details of this ablation study, including learning curves of the "always redraw" strategy in Figure 5.

5.2 Real-World Social Interaction Datasets with Large K

We also evaluate the methods on several real-world social interaction datasets that have many event types: see Appendix D.1 for details (e.g., data statistics, pre-processing, data splits, etc.).
In this section, we show the learning curves on two particularly interesting datasets (explained below) in Figure 2 and leave those on the other datasets (which look similar) to Appendix D.3.

EuroEmail (Paranjape et al., 2017). This dataset contains time-stamped emails between anonymized members of a European research institute. We work on a subset of the 100 most active members and end up with K = 10000 possible event types and 50000 training event tokens.

BitcoinOTC (Kumar et al., 2016). This dataset contains time-stamped rating (positive/negative) records between anonymized users on the BitcoinOTC trading platform. We work on a subset of the 100 most active users and end up with K = 19800 possible event types (self-rating is not allowed) but only 1000 training event tokens: this is an extremely data-sparse setting.

On these datasets, our model p_θ is still a neural Hawkes process. For the noise distribution q, we experiment with not only the coarse-to-fine neural process with C = 1 but also a homogeneous Poisson process. As shown in Figure 2, our NCE tends to perform better with the neural q: this is because a neural model can better fit the data and thus provide better training signals, analogous to how a good generator can benefit the discriminator in the generative adversarial framework (Goodfellow et al., 2014). NCE with the Poisson q also shows benefits through the early and middle training stages, but it may suffer larger variance (e.g., Figure 2a2) and end up with slightly worse generalization (e.g., Figure 2b2). MLE with different ρ values all eventually achieve the highest log-likelihood (≈ −10 on EuroEmail and ≈ −15 on BitcoinOTC), but most of these runs are so slow that their peaks are out of the current views. The b-NCE runs with different M values are slower, achieve worse generalization and suffer larger variance than our NCE; interestingly, b-NCE prefers the Poisson q to the neural q (better generalization on EuroEmail and smaller variance on BitcoinOTC). In general, LSE is the slowest, and the highest log-likelihood it can achieve (≈ −30 on EuroEmail and ≈ −25 on BitcoinOTC) is lower than that of MLE and our NCE.

Ablation Study II: Trained vs. Untrained q. The noise distributions (except the ground-truth q for Synthetic-1) that we have used so far were all pretrained on the same data as we train p_θ. The training cost is cheap: e.g., on the datasets in this section, the actual wall-clock training time for the neural q is less than 2% of what is needed to train p_θ, and training the Poisson q costs even less.¹³ ¹⁴ We also experimented with untrained noise distributions, and they were observed to perform worse (e.g., worse generalization, slower convergence and larger variance). See Appendix D.5 for more details, including learning curves (Figure 6).

5.3 Real-World Dataset with Dynamic Facts

In this section, we let p_θ be a neural Datalog through time (NDTT) model (Mei et al., 2020). Such a model can be used in a domain in which new events dynamically update the set of event types and the structure of their intensity functions. We evaluate our method on training the domain-specific models presented by Mei et al. (2020), on the same datasets they used:

RoboCup (Chen & Mooney, 2008). This dataset logs actions of robot players during RoboCup soccer games. The set of possible event types dynamically changes over time (e.g., only the ball possessor can kick or pass) as the ball is frequently transferred between players (by passing or stealing).
There are K = 528 event types over all time, but only about 20 of them are possible at any given time.

IPTV (Xu et al., 2018). This dataset contains time-stamped records of 1000 users watching 49 TV programs over 2012. The users are not able to watch a program until it is released, so the number of event types grows from K = 0 to K = 49000 as programs are released one after another.

The learning curves are displayed in Figure 3. On RoboCup, NCE only progresses faster than MLE at the early to middle training stages: M = 5 and M = 10 eventually achieved the highest log-likelihood at the same time as MLE, and M = 1 ended up with worse generalization. On IPTV, NCE with M = 1 turned out to learn as well as, and much faster than, MLE. The dynamic architecture makes it hard to parallelize the intensity computation; MLE in particular performs poorly in wall-clock time, and we needed a remarkably small ρ to let MLE finish within the shown time range. On both datasets, b-NCE and LSE drastically underperform MLE and NCE: their learning curves increase so slowly and achieve such poor generalization that only b-NCE with M = 5 and M = 10 are visible on the graphs.

Ablation Study III: Effect of C. In the above figures, we used the coarse-to-fine neural model as q. On RoboCup, each action (kick, pass, etc.) has a coarse-grained intensity, so C = 5. On IPTV, we partition the event vocabulary by TV program, so C = 49. We also experimented with C = 1: this reduces the number of intensities computed during sampling on both datasets, but has (slightly) worse generalization on RoboCup (since q becomes less expressive). See Appendix D.6 for more details, including learning curves (Figure 7).

6 Conclusion

We have introduced a novel instantiation of the general NCE principle for training a multivariate point process model. Our objective has the same optimal parameters as the log-likelihood objective (if the model is well-specified), but needs fewer expensive function evaluations and much less wall-clock time in practice. This benefit is demonstrated on several synthetic and real-world datasets. Moreover, our method is provably consistent and efficient under mild assumptions.

Broader Impact

Our method is designed to train a multivariate point process for probabilistic modeling of event streams. By describing this method and releasing code, we hope to facilitate probabilistic modeling of continuous-time sequential data in many domains. Good probabilistic models make it possible to impute missing events, anticipate possible future events, and react accordingly. They can also be used in exploratory data analysis. In addition to making it more feasible and more convenient for domain experts to train complex models with many event types, our method reduces the energy cost necessary to do so. Examples of event streams with potential social impact include a person's detailed food/exercise/sleep/medical event log, their social media interactions, their interactions with educational exercises or games, or their educational or workplace events (for time management and career planning); a customer's interactions with a particular company or its website or other user interface; a company's sales and purchases; geopolitical events, financial events, human activity modeling, music modeling, and dynamic resource requests. We are not aware of any negative broader impacts that might stem from publishing this work.

Disclosure of Funding Sources

This work was supported by a Ph.D. Fellowship Award to the first author by Bloomberg L.P.
and a National Science Foundation Grant No. 1718846 to the last author, as well as two Titan X Pascal GPUs donated by NVIDIA Corporation and compute cycles from the Maryland Advanced Research Computing Center.

Acknowledgments

We thank the anonymous NeurIPS reviewers and meta-reviewer as well as Hongteng Xu for helpful comments on this paper.
1. What is the focus and contribution of the paper on point processes?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and optimality?
3. What are the weaknesses of the paper, especially in comparison to other works in the field?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a noise-contrastive estimation method for point processes that is expected to be computationally efficient. The authors also prove that optimality is achieved under mild assumptions. Empirical results are used to demonstrate the method's efficiency and usefulness.

Strengths
The authors develop a new learning algorithm for point processes using the idea of noise-contrastive estimation. Optimality and efficiency are guaranteed through theoretical analysis.

Weaknesses
Since there are already methods, such as the work of Guo et al., that speed up the learning of point processes, adding a comparison with those methods would make the experiments more convincing.
NIPS
Title
Noise-Contrastive Estimation for Multivariate Point Processes

Abstract
The log-likelihood of a generative model often involves both positive and negative terms. For a temporal multivariate point process, the negative term sums over all the possible event types at each time and also integrates over all the possible times. As a result, maximum likelihood estimation is expensive. We show how to instead apply a version of noise-contrastive estimation—a general parameter estimation method with a less expensive stochastic objective. Our specific instantiation of this general idea works out in an interestingly non-trivial way and has provable guarantees for its optimality, consistency and efficiency. On several synthetic and real-world datasets, our method shows benefits: for the model to achieve the same level of log-likelihood on held-out data, our method needs considerably fewer function evaluations and less wall-clock time.

1 Introduction

Maximum likelihood estimation (MLE) is a popular training method for generative models. However, to obtain the likelihood of a generative model given the observed data, one must compute the probability of each observed sample, which often includes an expensive normalizing constant. For example, in a language model, each word is typically drawn from a softmax distribution over a large vocabulary, whose normalizing constant requires a summation over the vocabulary.

This paper aims to alleviate a similar computational cost for multivariate point processes. These generative models are natural tools to analyze streams of discrete events in continuous time. Their likelihood is improved not only by raising the probability of the observed events, but by lowering the probabilities of the events that were observed not to occur. There are infinitely many times at which no event of any type occurred; to predict these non-occurrences, the likelihood must integrate the infinitesimal event probability for each event type over the entire observed time interval. Therefore, the likelihood is expensive to compute, particularly when there are many possible event types.

As an alternative to MLE, we propose to train the model by learning to discriminate the observed events from events sampled from a noise process. Our method is a version of noise-contrastive estimation (NCE), which was originally developed for unnormalized (energy-based) distributions and then extended to conditional softmax distributions such as language models. To our best knowledge, we are the first to extend the method and its theoretical guarantees (for optimality, consistency and efficiency) to the context of multivariate point processes.
We will also discuss similar efforts in related areas in section 4. On several datasets, our method shows compelling results. By evaluating fewer event intensities, training takes much less wall-clock time while still achieving competitive log-likelihood.

2 Preliminaries

2.1 Event Streams and Multivariate Point Processes

Given a fixed time interval [0, T), we may observe an event stream x_{[0,T)}: at each continuous time t, the observation x_t is one of the discrete types {∅, 1, ..., K}, where ∅ means no event. A non-∅ observation is called an event. A generative model of an event stream is called a multivariate point process.* (*This paper uses endnotes instead of footnotes; they are found at the start of the supplementary material.)

We wish to fit an autoregressive probability model to observed event streams. In a discrete-time autoregressive model, events would be generated from left to right, where x_t is drawn from a distribution that depends on x_0, ..., x_{t−1}. The continuous-time version still generates events from left to right,¹ but at any specific time t we have p(x_t = ∅) = 1, with only an infinitesimal probability of any event. (For a computationally practical sampling method, see section 3.1.)

The model is a stochastic process defined by functions λ_k that determine a finite intensity λ_k(t | x_{[0,t)}) ≥ 0 for each event type k ≠ ∅ at each time t > 0. This intensity depends on the history of events x_{[0,t)} that were drawn at times < t. It quantifies the instantaneous rate at time t of events of type k. That is, λ_k(t | x_{[0,t)}) is the limit as dt → 0⁺ of 1/dt times the expected number of events of type k on the interval [t, t+dt), where the expectation is conditioned on the history. As the event probabilities are infinitesimal, the times of the events are almost surely distinct.

To ensure that we have a point process, the intensity functions must be chosen such that the total number of events on any bounded interval is almost surely finite. Models of this form include inhomogeneous Poisson processes (Daley & Vere-Jones, 2007), in which the intensity functions ignore the history, as well as (non-explosive) Hawkes processes (Hawkes, 1971) and their modern neural versions (Du et al., 2016; Mei & Eisner, 2017). Most models use intensity functions that are continuous between events. Our analysis requires only

Assumption 1 (Continuity). For any event stream x_{[0,T)} and event type k ∈ {1, ..., K}, λ_k(t | x_{[0,t)}) is Riemann integrable, i.e., bounded and continuous almost everywhere w.r.t. time t.

2.2 Maximum Likelihood Estimation: Usefulness and Difficulties

In practice, we parameterize the intensity functions by θ. We write p_θ for the resulting probability density over event streams. When learning θ from data, we make the conventional assumption that the true point process p* actually falls into the chosen model family:

Assumption 2 (Existence). There exists at least one parameter vector θ* such that p_{θ*} = p*.

Then, as proved in Appendix A, such a θ* can be found as an argmax of

J_{\text{LL}}(\theta) \;\overset{\text{def}}{=}\; \mathbb{E}_{x_{[0,T)} \sim p^*}\left[\log p_\theta(x_{[0,T)})\right] \qquad (1)

Given assumption 1, the θ values that maximize J_LL(θ) are exactly the set Θ* of values for which p_θ = p*: any θ for which p_θ ≠ p* would end up with a strictly smaller J_LL(θ), by increasing the cross entropy −p* log p_θ over some interval (t, t′) for a set of histories with non-zero measure. If we modify equation (1) to take the expectation under the empirical distribution of event streams x_{[0,T)} in the training dataset, then J_LL(θ) is proportional to the log-likelihood of θ.
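Before turning to the likelihood itself, here is a minimal Python sketch of one classical member of this model family mentioned above—the exponential-kernel Hawkes process—purely to make the history-dependent intensity concrete. The parameter names are ours and this is an illustration, not any model used in the paper's experiments.

```python
import math

def hawkes_intensity(k, t, history, mu, alpha, beta):
    """lambda_k(t | x_[0,t)) for a classical exponential-kernel Hawkes
    process: a base rate plus decaying excitation from each past event.

    history      : list of (t_i, k_i) with t_i < t
    mu[k]        : base rate of type k
    alpha[k_i][k]: how much a past event of type k_i excites type k
    beta         : decay rate of the excitation
    """
    rate = mu[k]
    for t_i, k_i in history:
        rate += alpha[k_i][k] * math.exp(-beta * (t - t_i))
    return rate
```

Here each past event temporarily raises the intensities of related event types, so the process is autoregressive; the neural versions cited above replace this fixed parametric form with an LSTM-driven one.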
For any x_{[0,T)} that satisfies the condition in assumption 1, the log-density used in equation (1) can be expressed in terms of λ_k(t | x_{[0,t)}):

\log p_\theta(x_{[0,T)}) \;=\; \sum_{t:\,x_t \neq \emptyset} \log \lambda_{x_t}(t \mid x_{[0,t)}) \;-\; \int_{t=0}^{T} \sum_{k=1}^{K} \lambda_k(t \mid x_{[0,t)})\,dt \qquad (2)

Notice that the second term lacks a log. It is expensive to compute in the following cases:
• The total number of event types K is large, making the sum over k = 1, ..., K slow.
• The integral over [0, T) is slow to estimate well, e.g., via a Monte Carlo estimate (T/J) ∑_{j=1}^{J} ∑_{k=1}^{K} λ_k(t_j | x_{[0,t_j)}), where each t_j is randomly sampled from the uniform distribution over [0, T).
• The chosen model architecture makes it hard to parallelize the λ_k(t_j) computation over j and k.

2.3 Noise-Contrastive Estimation in Discrete Time

For autoregressive models of discrete-time sequences, a similar computational inefficiency can be tackled by applying the principle of noise-contrastive estimation (Gutmann & Hyvärinen, 2010), as follows. For each history x_{0:t} := x_0 x_1 ... x_{t−1} in training data, NCE trains the model p_θ to discriminate the actually observed datum x_t from some noise samples whose distribution q is known. The intuition is: optimal performance is obtained if and only if p_θ matches the true distribution p*.

More precisely, given a bag {x^0_t, x^1_t, ..., x^M_t}, where exactly one element of the bag was drawn from p* and the rest drawn i.i.d. from q, consider the log-posterior probability (via Bayes' Theorem²) that x^0_t was the one drawn from p*:

\log \frac{p^*(x^0_t \mid x_{0:t}) \prod_{m=1}^{M} q(x^m_t \mid x_{0:t})}{\sum_{m=0}^{M} p^*(x^m_t \mid x_{0:t}) \prod_{m' \neq m} q(x^{m'}_t \mid x_{0:t})} \qquad (3)

The "ranking" variant of NCE (Jozefowicz et al., 2016) substitutes p_θ for p* in this expression, and seeks θ (e.g., by stochastic gradient ascent) to maximize the expectation of the resulting quantity when x^0_t is a random observation in training data,³ x_{0:t} is its history, and x^1_t, ..., x^M_t are drawn i.i.d. from q(· | x_{0:t}).

This objective is really just conditional maximum log-likelihood on a supervised dataset of (M+1)-way classification problems. Each problem presents an unordered set of M + 1 samples—one drawn from p* and the others drawn i.i.d. from q. The task is to guess which sample was drawn from p*. Conditional MLE trains θ to maximize (in expectation) the log-probability that the model assigns to the correct answer. In the infinite-data limit, it will find θ (if possible) such that these log-probabilities match the true ones given by (3). For that, it is sufficient for θ to be such that p_θ = p*. Given assumption 2, Ma & Collins (2018) show that p_θ = p* is also necessary, i.e., the NCE task is sufficient to find the true parameters. Although the NCE objective does not learn to predict the full observed sample x_t as MLE does, but only to distinguish it from the M noise samples, their theorem implies that in expectation over all possible sets of M noise samples, it actually retains all the information (provided that M > 0 and q has support everywhere that p* does).

This NCE objective is computationally cheaper than MLE when the distribution p_θ(· | x_{0:t}) is a softmax distribution over {1, ..., K} with large K. The reason is that the expensive normalizing constants in the numerator and denominator of equation (3) need not be computed. They cancel out because all the probabilities are conditioned on the same (actually observed) history.
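The normalizer cancellation is easy to see in code. Below is a minimal PyTorch sketch (our own illustration, not the paper's code) of one (M+1)-way ranking-NCE term for a discrete softmax model: it works directly with unnormalized scores, so the softmax constant log Z is never computed.

```python
import torch

def ranking_nce_term(scores, log_q, true_idx=0):
    """One (M+1)-way ranking-NCE problem (cf. equation (3)).

    scores   : (M+1,) unnormalized model scores; scores[m] plays the role
               of log p_theta(x^m_t | x_{0:t}) + log Z, with Z never computed.
    log_q    : (M+1,) log noise probabilities log q(x^m_t | x_{0:t}).
    true_idx : index of the sample drawn from the data (0 by convention).
    Returns the log-posterior that entry true_idx is the true sample.
    """
    # Equation (3) simplifies to a softmax over the log-ratios log(p/q);
    # the shared normalizer log Z cancels between numerator and denominator.
    log_ratios = scores - log_q
    return torch.log_softmax(log_ratios, dim=0)[true_idx]
```

Training maximizes the expectation of this term (equivalently, minimizes its negative as a cross-entropy loss) over observed data and noise draws.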
3 Applying Noise-Contrastive Estimation in Continuous Time

The expensive ∫∑ term in equation (2) is rather similar to a normalizing constant,⁴ as it sums over non-occurring events. We might try to avoid computing it⁵ by discretizing the time interval [0, T) into finitely many intervals of width ∆ and applying NCE. In this case, we would be distinguishing the true sequence of events on an interval [i∆, (i+1)∆) from corresponding noise sequences on the same interval, given the same (actually observed) history x_{[0,i∆)}. Unfortunately, the distribution p_θ(· | x_{[0,i∆)}) in the objective still involves an ∫∑ term, where the integral is over [i∆, (i+1)∆) and the inner sum is over k.

The solution is to shrink the intervals to infinitesimal width dt. Then our log-posterior over each of them becomes

\log \frac{p_\theta(x^0_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m=1}^{M} q(x^m_{[t,t+dt)} \mid x^0_{[0,t)})}{\sum_{m=0}^{M} p_\theta(x^m_{[t,t+dt)} \mid x^0_{[0,t)}) \prod_{m' \neq m} q(x^{m'}_{[t,t+dt)} \mid x^0_{[0,t)})} \qquad (4)

We will define the noise distribution q in terms of finite intensity functions λ^q_k, like the ones λ_k that define p_θ. As a result, at a given time t, there is only an infinitesimal probability that any of {x^0_t, x^1_t, ..., x^M_t} is an event. Nonetheless, at each time t ∈ [0, T), we will consider generating a noise event (for each m > 0) conditioned on the actually observed history x_{[0,t)}. Among these uncountably many times t, we may have some for which x^0_t ≠ ∅ (the observed events), or where x^m_t ≠ ∅ for some 1 ≤ m ≤ M (the noise events). Almost surely, the set of times t with a real or noise event remains finite.

Our NCE objective is the expected sum of equation (4) over all such times t in an event stream, when the stream is drawn uniformly from the set of streams in the training dataset—as in section 6—and the noise events are then drawn as above. Our objective ignores all other times t, as they provide no information about θ. After all, when x^0_t = ... = x^M_t = ∅, the probability that x^0_t is the one drawn from the true model must be 1/(M+1) by symmetry, regardless of θ. At these times, the ratio in equation (4) does reduce to 1/(M+1), since all probabilities are 1.

At the times t that we do consider, how do we compute equation (4)? Almost surely, exactly one of x^0_t, ..., x^M_t is an event k for some k ≠ ∅. As a result, exactly one factor in each product is infinitesimal (dt times the λ_k or λ^q_k intensity), and the other factors are 1. Thus, the dt factors cancel out between numerator and denominator, and equation (4) simplifies to

\log \frac{\lambda_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})} \;\text{ if } x^0_t = k, \qquad \log \frac{\lambda^q_k(t \mid x^0_{[0,t)})}{\lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)})} \;\text{ if } x^0_t = \emptyset \qquad (5)

When a gradient-based optimization method adjusts θ to increase equation (5), the intuition is as follows. If x^0_t = k, the model intensity λ_k(t) is increased to explain why an event of type k occurred at this particular time t. If x^0_t = ∅, the model intensity λ_k(t) is decreased to explain why an event of type k did not actually occur at time t (it was merely a noise event x^m_t = k, for some m ≠ 0). These cases achieve the same qualitative effects as following the gradients of the first and second terms, respectively, in the log-likelihood (2). Our full objective is an expectation of the sum of finitely many such log-ratios:⁶

J_{\text{NC}}(\theta) \;\overset{\text{def}}{=}\; \mathbb{E}_{x^0_{[0,T)} \sim p^*,\; x^{1:M}_{[0,T)} \sim q}\left[ \sum_{t:\,x^0_t \neq \emptyset} \log \frac{\lambda_{x^0_t}(t \mid x^0_{[0,t)})}{\bar{\lambda}_{x^0_t}(t \mid x^0_{[0,t)})} \;+\; \sum_{m=1}^{M} \sum_{t:\,x^m_t \neq \emptyset} \log \frac{\lambda^q_{x^m_t}(t \mid x^0_{[0,t)})}{\bar{\lambda}_{x^m_t}(t \mid x^0_{[0,t)})} \right] \qquad (6)

where \bar{\lambda}_k(t \mid x^0_{[0,t)}) \overset{\text{def}}{=} \lambda_k(t \mid x^0_{[0,t)}) + M\lambda^q_k(t \mid x^0_{[0,t)}).
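Each summand of (6) is a cheap scalar computation once the two relevant intensities are known. A minimal Python sketch of one such term (our own illustration of equation (5), with hypothetical argument names):

```python
import math

def nce_term(lam_k, lam_q_k, M, is_observed):
    """One summand of the NCE objective (6), for an event of type k at time t.

    lam_k       : model intensity lambda_k(t | x^0_[0,t))
    lam_q_k     : noise intensity lambda^q_k(t | x^0_[0,t))
    is_observed : True if the event came from the observed stream (x^0_t = k),
                  False if it is one of the M noise events.
    """
    lam_bar = lam_k + M * lam_q_k          # the shared denominator
    top = lam_k if is_observed else lam_q_k
    return math.log(top / lam_bar)
```

Gradient ascent on a sum of such terms raises λ_k at observed events and lowers it at noise events, mirroring the two terms of the log-likelihood (2) without ever evaluating the ∫∑.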
1. What is the focus and contribution of the paper regarding noise contrastive estimation?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical grounding and experimental evaluation?
3. What are the weaknesses of the paper, such as the lack of discussion and comparison of other methods?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
5. Are there any suggestions for improving the paper, such as including a broader discussion of related works or providing more informative plots?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
While the authors' response addressed some of my concerns, it is not enough to raise my rating. The paper describes how noise-contrastive estimation can be used to train generative models of multivariate point processes in continuous time. The authors show that training is faster than training with maximum likelihood estimation, and in most cases of similar quality. The authors prove that, under mild assumptions, their method fulfils theoretical guarantees and converges to the true parameters for infinite data. They apply the method to multiple synthetic and real datasets, show how different parameters affect the outcome of training, and perform ablation studies. The authors discuss related works and give their thoughts on the broader impact of their work.

Strengths
The paper is clearly written, describes well what is needed to change NCE to work for multivariate point processes, and gives enough information to fully reproduce the results. It is a good contribution, and the authors provide ample theoretical grounding for their claims and evaluate their method on a range of datasets. They provide multiple ablation studies and discuss their choice of parameters in detail, especially in the supplementary material.

Weaknesses
I was missing a discussion and comparison of other ways to approximate the log-likelihood, e.g. variational approximations or Monte Carlo estimates. It would also be interesting to see what simple baseline log-likelihood models would have achieved on the data. The authors show that for some of the data using a Poisson process as q achieves very good results, but not whether assuming a simpler model for p would work as well. In general, the related works section could be a bit broader, touching on methods beyond NCE. While the authors compare runs of NCE with different values for parameters like C and M, it would have been more informative to show a plot of the relationship of these parameters to convergence speed directly, instead of just having multiple runs in the same likelihood plot. I think it is a good contribution but not a huge step from prior work on NCE for point processes. Given that the main advantage over training with MLE is the computational complexity, it would also be nice to have shown results on data where MLE is not feasible.
NIPS
Title
Toward Understanding Privileged Features Distillation in Learning-to-Rank

Abstract
In learning-to-rank problems, a privileged feature is one that is available during model training, but not available at test time. Such features naturally arise in merchandised recommendation systems; for instance, "user clicked this item" as a feature is predictive of "user purchased this item" in the offline data, but is clearly not available during online serving. Another source of privileged features is those that are too expensive to compute online but feasible to be added offline. Privileged features distillation (PFD) refers to a natural idea: train a "teacher" model using all features (including privileged ones) and then use it to train a "student" model that does not use the privileged features. In this paper, we first study PFD empirically on three public ranking datasets and an industrial-scale ranking problem derived from Amazon's logs. We show that PFD outperforms several baselines (no-distillation, pretraining-finetuning, self-distillation, and generalized distillation) on all these datasets. Next, we analyze why and when PFD performs well via both empirical ablation studies and theoretical analysis for linear models. Both investigations uncover an interesting non-monotone behavior: as the predictive power of a privileged feature increases, the performance of the resulting student model initially increases but then decreases. We show the reason for the later decreasing performance is that a very predictive privileged teacher produces predictions with high variance, which lead to high variance student estimates and inferior testing performance.

1 Introduction

For recommendation systems, the features at test time are typically a subset of features available during training. Those missing features at test time are either too expensive to compute in real time, or they are post-event features. For instance, for an e-commerce website, "click" is a strong feature for predicting "purchase", but "click" exists as a feature only in the offline training data, not during online serving (i.e., one cannot observe "click" before recommendations are generated). Those features that exist only during training are called privileged features. Those that exist during both training and testing are called regular features [XLG+20].

The naive approach is to ignore the privileged features and train a model that only takes regular features. Such methods inevitably miss the information in the privileged features and lead to inferior performance. A natural instinct to resolve this is to (a) use the privileged features (either by themselves [LPSBV16] or in conjunction with regular features [XLG+20]) to train a "teacher" model, and then (b) use it to transfer information via distillation² into a "student" model that only uses the regular features. The approach of a teacher only using privileged features is named generalized distillation (GenD) [LPSBV16], and the approach of a teacher using both privileged and regular features has been referred to as privileged feature distillation (PFD) [XLG+20].

*This work was done while Shuo Yang was interning at Amazon.
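Since the analysis that follows revolves around this teacher-student setup, here is a minimal PyTorch sketch of a PFD-style student loss. It is our own illustration under stated assumptions—a pointwise logistic data loss, mean-squared teacher matching, and a mixing ratio alpha (the paper's ablations vary such a ratio)—not the exact objective used in the paper's experiments.

```python
import torch
import torch.nn.functional as F

def pfd_student_loss(student_logits, teacher_logits, labels, alpha=0.5):
    """Distillation loss for a student that sees only regular features.

    student_logits: student scores computed from regular features.
    teacher_logits: scores from a teacher trained on regular + privileged
        features; detached so no gradient flows into the teacher.
    labels: ground-truth relevance labels (floats in [0, 1]).
    alpha: mixing ratio between the data loss and the distillation loss.
    """
    data_loss = F.binary_cross_entropy_with_logits(student_logits, labels)
    distill_loss = F.mse_loss(student_logits, teacher_logits.detach())
    return (1 - alpha) * data_loss + alpha * distill_loss
```

Setting alpha = 0 recovers the no-distillation baseline, while a teacher trained on privileged features alone (instead of both feature sets) would turn this into GenD.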
In this paper we provide a detailed investigation – first via empirical ablation studies on moderate-scale public and industrial-scale proprietary datasets with deep-learning-to-rank models, and second via rigorous theoretical analysis on simple linear models – into why and when privileged feature distillation works and when it does not. While this paper focuses on learning-to-rank, our results apply to regression/classification problems in general. As a summary, our main contributions are:

• We evaluate PFD on three moderate-scale public ranking datasets: Yahoo, Istella, and MSLR-Web30k, and an industrial-scale proprietary dataset derived from Amazon search logs.
• In all evaluated settings, PFD is better than or as good as the baselines: no-distillation, GenD (teacher model only uses privileged features), self-distillation (teacher model only uses regular features), and pretraining on privileged features then finetuning (when applicable) (Table 2).
• We conduct comprehensive ablation studies for PFD. We find that
– PFD is effective as long as the teacher loss dominates the distillation loss, and the performance is not sensitive to α. Specifically, the distillation loss is a linear combination of the loss w.r.t. the data and the loss w.r.t. the teacher predictions, and α is the mixing ratio (Figure 3).
– While it is known that the gains from self-distillation (over a no-distillation one-shot training baseline) are larger when the positive labels are sparser, we see that these gains are further amplified by PFD; i.e., the relative gain of PFD over self-distillation also increases as the labels become sparser (Figure 4).
– Non-monotonicity in the effectiveness of PFD: as the predictive power of a privileged feature increases, the resulting student performance initially increases but then decreases (Figure 5).
• To provide a deeper insight into the landscape of privileged features and distillation, we next rigorously analyze it in a stylized setting involving linear models. We show that
– PFD works because the teacher can explain away the variance arising from the privileged features, thus allowing the student to focus on the part it can predict (Theorem 1).
– The reason that GenD is inferior to PFD (as seen in our empirical evaluation) is that it results in a weaker teacher, and also that, in the case where the privileged and regular features are independent, the teacher predictions appear as pure noise to the student (who cannot learn from them) (Remark 2).
– A very predictive privileged feature induces high-variance teacher predictions, which lead to inaccurate student estimates and inferior testing performance. This explains the observation that the most predictive privileged features do not give the best performance (i.e., the non-monotonicity) in our empirical ablation studies (Theorem 2).

The rest of the paper is organized as follows: Section 2 covers related work. Section 3 introduces the problem setup, the PFD algorithm, and other algorithms for comparison. Section 4 presents the empirical evaluation and ablation studies of PFD; and Section 5 presents theoretical insights.

2 Related Work Privileged features widely exist in different machine learning problems, including speech recognition [MM16], medical imaging [GCA+19], image super-resolution [LLKH20], etc. [FA12, FTRS13, FKSH14, ALL17]. Privileged features are not accessible during testing either because they are too expensive to compute in real time, or because they are post-event features (and thus cannot be used as input) [CM18].
Learning with privileged features was pioneered in [VV09], where they propose a framework named “learning using privileged information” (LUPI). At its core, LUPI uses privileged information to distinguish between easy and hard examples. The methods are thus closely related to SVMs, as the hardness of an example can be expressed by the slack variable. For instance, [VV09, PIVV10] propose the “SVM+” algorithm, which generates slack variables from privileged features and learns an SVM based on regular features with those slack variables; [SQL13] proposes a pair-wise SVM algorithm for ranking, which uses privileged features to distinguish easy and hard pairs. [LHS14] presents a variation where the privileged features are used to generate importance weightings for different training samples. Empirically, [SER14] demonstrates that whether LUPI is effective critically depends on experimental settings (e.g., preprocessing, training/validation split, etc.). [VI15] considers transferring the kernel function from a teacher SVM that only uses privileged features to a student SVM that only uses regular features; [LDX+20] extends the SVM+ algorithm to imperfect privileged features.

²Here, by distillation we mean the standard practice of labeling the training dataset using teacher predictions, and using these as supervision targets in the training of the student model.

Model distillation [HVD+15] is a common method for knowledge transfer, typically from a large model to a smaller one [PPA18, GYMT21]. Recent works have shown great empirical success in ranking problems [TW18, HAS+20, RPM+21] and even in cases where the teacher model and student model have identical structure [FLT+18, QYT+21]. Using distillation to learn from privileged features was first proposed in [LPSBV16] as “generalized distillation” (GenD). It provides a unified view of LUPI and distillation. GenD, along with its variants [MM16, GMM19, LLKH20], trains a teacher model with only privileged features and then trains a student model to mimic the teacher’s predictions. PFD was recently proposed in [XLG+20], where the teacher model takes both regular and privileged features as input. PFD and GenD differ from standard model distillation as they focus on exploiting privileged features rather than on reducing the model size. [XLG+20] empirically demonstrates the superior performance of PFD for recommendation systems on a non-public dataset.

Understanding of privileged features distillation is lacking, despite the aforementioned empirical success. Previously, [PV10] showed that LUPI brings faster convergence under a strong assumption that the best classifier is realizable with only privileged features. [LPSBV16] shows that GenD enjoys a fast convergence rate; it assumes that the teacher model has a much smaller function class complexity than the student model, which does not match PFD. [GCFY18] studies GenD under semi-supervised learning and shows that the benefits come from a reduction in the student's function class complexity. However, it does not quantify such reduction, and the theory does not explain what the benefit of using privileged features is. To the best of our knowledge, there is no empirical or theoretical study explaining why PFD is effective.

Other ways of utilizing privileged features have also been proposed. [CJFY17] uses privileged information to learn a more diverse representation to improve image classification performance. [LLKH20, WZW+21] propose distillation schemes for better feature extraction from regular features.
A more recent work [CJKB22] considers training a model with both regular and privileged features to obtain a better internal representation of the regular features.

3 Problem Setup and Algorithms Consider a learning-to-rank problem where each query-document pair has features x ∈ X and z ∈ Z and a label y ∈ Y (e.g., click or human-annotated relevance) drawn from an unknown distribution D(y|x, z). Suppose x is the regular feature that is available during both training and testing and z is only available during training. Concretely, a privileged feature is defined in the literature as follows:

Definition 1 (Privileged Feature [CJKB22]). For a feature z that exists during training but not testing, we say z is a privileged feature if and only if $I(y; z \mid x) := H(y \mid x) - H(y \mid x, z) > 0$.

Conditional mutual information I(y; z|x) and conditional entropy H(·|·) follow the standard notation of information theory. According to Definition 1, the privileged feature z provides extra predictive power for y. For the rest of this paper, we focus on the setting where z is a privileged feature.

Remark 1. An implication of Definition 1 is that the privileged feature z can be independent of the regular feature x. In such cases, any transformation of z is not learnable from x, and therefore using z as an auxiliary learning target does not help. Interestingly, PFD can still improve the student performance, even when z and x are independent (see Section 5).

We consider the following general learning problem: we are given a labeled training set of size n, $S_\text{label} := \{(x_i, z_i, y_i)\}_{i \in [n]}$, and an unlabeled training set of size m, $S_\text{unlabel} := \{(x_i, z_i)\}_{i \in [m]}$. Our goal is to generate a good ranking based only on the regular features x. For clarity of exposition, we only consider pointwise scoring functions $\mathcal{F} := \{f \mid f : \mathcal{X} \to \mathcal{Y}\}$, which generate a score for each document; the ranking is induced by sorting the scores. The results in this paper can be easily extended to models beyond pointwise scoring functions (e.g., DASALC [QYZ+21]). The distinction between labeled and unlabeled datasets is for generality. The unlabeled dataset naturally appears in recommendation systems, where the majority of search logs do not contain any user interactions. Instead of taking all such logs as negative samples, it is more proper to view them as unlabeled data due to the lack of user engagement. For the logs that contain a click, the documents therein with no click can be treated as negative samples.

3.1 Privileged features distillation PFD first trains a teacher model that takes both x and z as input to predict y, i.e., the teacher function class is $\mathcal{G}_\text{PFD} := \{g \mid g : \mathcal{X} \times \mathcal{Z} \to \mathcal{Y}\}$. For simplicity, we consider a pointwise loss $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ in this section, while the method can be easily extended to other loss functions (see an extension to pairwise loss in Section 4). Privileged features distillation takes the following two steps:

Step I: Train a teacher model $g_\text{PFD} \in \mathcal{G}_\text{PFD}$ by minimizing the loss on the labeled dataset: $\sum_{(x_i, z_i, y_i) \in S_\text{label}} \ell(g(x_i, z_i), y_i)$. In practice, a gradient-based optimizer is used for loss minimization.

Step II: Train a student model by distillation. The teacher model $g_\text{PFD}$ trained in Step I is used to generate pseudo labels on $S_\text{label}$ and $S_\text{unlabel}$. Let $S_\text{all}$ denote the union of $S_\text{label}$ and $S_\text{unlabel}$. The student model is trained by minimizing the following distillation loss:

$$\underbrace{\alpha \cdot \sum_{(x_i, y_i) \in S_\text{label}} \ell(f(x_i), y_i)}_{\text{data loss}} \;+\; \underbrace{(1 - \alpha) \cdot \sum_{(x_i, z_i) \in S_\text{all}} \ell(f(x_i), g_\text{PFD}(x_i, z_i))}_{\text{teacher loss}}, \tag{1}$$

where α ∈ (0, 1) controls the mixing ratio between the data loss and the teacher loss.
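The two-step procedure above maps directly onto code. Below is a minimal PyTorch sketch, not the paper's implementation: the 2-layer MLPs (the paper's ranking model is a 5-layer network), the random toy data, α = 0.5, and the optimizer settings are all illustrative assumptions, and a plain pointwise binary cross-entropy stands in for the losses defined in Appendix A.2.

```python
import torch
import torch.nn as nn

d_x, d_z = 16, 4  # hypothetical regular / privileged feature dimensions

def mlp(d_in):
    # Small stand-in for the paper's 5-layer ranking network.
    return nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                         nn.Linear(32, 1), nn.Sigmoid())

teacher = mlp(d_x + d_z)  # Step I model: sees regular + privileged features
student = mlp(d_x)        # Step II model: sees regular features only
bce = nn.BCELoss()

def fit(model, inputs, targets, steps=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        bce(model(inputs).squeeze(-1), targets).backward()
        opt.step()

# Toy stand-ins for S_label (n = 256) and S_unlabel (m = 1024).
x_lab, z_lab = torch.randn(256, d_x), torch.randn(256, d_z)
y_lab = torch.randint(0, 2, (256,)).float()
x_unl, z_unl = torch.randn(1024, d_x), torch.randn(1024, d_z)

# Step I: train the teacher on the labeled set with both feature groups.
fit(teacher, torch.cat([x_lab, z_lab], dim=1), y_lab)

# Step II: pseudo-label S_all = S_label ∪ S_unlabel with the teacher,
# then train the student on the Equation (1) mixture of the two losses.
x_all, z_all = torch.cat([x_lab, x_unl]), torch.cat([z_lab, z_unl])
with torch.no_grad():
    y_teacher = teacher(torch.cat([x_all, z_all], dim=1)).squeeze(-1)

alpha = 0.5
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    data_loss = bce(student(x_lab).squeeze(-1), y_lab)
    teacher_loss = bce(student(x_all).squeeze(-1), y_teacher)
    (alpha * data_loss + (1 - alpha) * teacher_loss).backward()
    opt.step()
```

Note how Step II trains the student against the teacher's soft predictions over all of S_all, which is what lets the unlabeled examples contribute to the student.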
3.2 Other algorithms for comparison Here we introduce two other algorithms for comparison; see the illustration in Figure 1.

GenD [LPSBV16] is a distillation method where the teacher model takes only privileged features as input, i.e., the teacher function class is $\mathcal{G}_\text{GenD} := \{g \mid g : \mathcal{Z} \to \mathcal{Y}\}$. The teacher model $g_\text{GenD} \in \mathcal{G}_\text{GenD}$ is obtained by minimizing $\sum_{(z_i, y_i) \in S_\text{label}} \ell(g(z_i), y_i)$. As in PFD, the distillation loss is a linear combination of the data loss and the teacher loss.

Self-distillation [FLT+18, QYT+21] is a distillation method where the teacher model has the same structure as the student model. Specifically, the teacher model $g_\text{self-dist.} \in \mathcal{F}$ is obtained by minimizing $\sum_{(x_i, y_i) \in S_\text{label}} \ell(g(x_i), y_i)$. Notice that $\mathcal{F}$ is also the student function class. As in PFD, the distillation loss is a linear combination of the data loss and the teacher loss. Comparing PFD against self-distillation separates the benefits of adopting privileged features from those of distillation.

4 Experiments

4.1 Main results on public datasets We first evaluate the performance of PFD on three widely used public ranking datasets. Specifically, we use Set1 from the “Yahoo! Learn to Rank Challenge” [CC11], the “Istella Learning to Rank” dataset [DLN+16], and the Microsoft Learning to Rank “MSLR-Web30k” dataset [QL13]. We refer to them as “Yahoo”, “Istella”, and “Web30k” throughout this section.

Datasets overview and preprocessing. The training samples in all three datasets can be viewed as query groups, where each query group contains 1 query and multiple documents to be ranked. Each query-document pair is represented as a real-valued feature vector (e.g., user dwelling time, tf-idf of the document, etc.; see [CC11] for details). Further, each query-document pair has a human-annotated relevance score r ∈ {0, 1, 2, 3, 4}. All datasets are preprocessed by removing query groups that contain no positive relevance score or have fewer than 10 documents. The features are transformed by the log1p transformation as in [ZWBN20, QYZ+21].

Binary label generation. In practice, a binary label (e.g., click) is more commonly seen and easier to obtain than a relevance score. For our experiments, we generate a binary label y for each query-document pair based on the human-annotated relevance score r. Specifically:

$$y = \mathbb{1}\left(t \cdot r + G_1 > t \cdot \tau_\text{target} + G_0\right), \tag{2}$$

where t is a temperature parameter and $G_1$ and $G_0$ follow the standard Gumbel distribution. It can be shown that y is 1 with probability $\sigma(t \cdot (r - \tau_\text{target}))$, where σ(·) is the sigmoid function (see Appendix A.1 for the proof). For the rest of our experiments, we set t = 4 and $\tau_\text{target} = 4.8$ unless otherwise mentioned. We refer to query groups that contain at least one y = 1 as positive query groups; other query groups are referred to as negative query groups.

Regular and privileged features split. For each of the datasets, we sort the features according to the magnitude of their correlations with the binary label y and use the top 200, 50, and 40 features as privileged features for Yahoo, Istella, and Web30k, respectively. The other features are used as regular features. Please see Table 1 for dataset statistics after preprocessing and binary label generation.
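Equation (2) is a Gumbel-max construction: since the difference of two independent standard Gumbel variables is logistic, P(y = 1) = σ(t(r − τ_target)). A short NumPy sketch (the seed and sample size are arbitrary choices) checks this for r = 4:

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_labels(r, t=4.0, tau_target=4.8):
    # Equation (2): y = 1{ t*r + G1 > t*tau_target + G0 }, G0, G1 ~ Gumbel(0, 1).
    g1 = rng.gumbel(size=r.shape)
    g0 = rng.gumbel(size=r.shape)
    return (t * r + g1 > t * tau_target + g0).astype(int)

r = np.full(100_000, 4.0)
print(binary_labels(r).mean())               # empirical P(y = 1), about 0.039
print(1 / (1 + np.exp(-4.0 * (4.0 - 4.8))))  # sigmoid(t*(r - tau_target)) ~ 0.0392
```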
Ranking model and performance metric. The ranking model is a 5-layer fully connected neural network, which maps the query-document feature vector to a real-valued score s ∈ [0, 1]. The ranking π̂ of documents is obtained by sorting the scores in decreasing order, where π̂(i) represents the rank of the i-th document. Ranking performance is measured by the NDCG@k metric:

$$\text{NDCG@}k(\hat\pi, y) = \frac{\text{DCG@}k(\hat\pi, y)}{\text{DCG@}k(\pi^*, y)}, \qquad \text{DCG@}k(\pi, y) = \sum_{i :\, \pi(i) \le k} \frac{2^{y_i} - 1}{\log_2(1 + \pi(i))},$$

where π* is the optimal ranking obtained by sorting the $y_i$.

PFD is effective for all three datasets. We evaluate the efficacy of PFD on all three aforementioned datasets, under both pointwise (RankBCE, i.e., binary cross-entropy) and pairwise (RankNet [BSR+05]) loss functions (see definitions in Appendix A.2). Please see the evaluated algorithms and results in Table 2 (complete results with the RankNet loss are deferred to Table 4). Figure 2 shows the testing NDCG@8 curve on Yahoo and Web30k with the RankBCE loss. Table 2 shows that PFD has the best performance in all evaluated settings. We remark that (1) the only difference between PFD and self-distillation is that the teacher in PFD additionally uses privileged features and therefore has better prediction accuracy than the teacher in self-distillation; comparing PFD with self-distillation reveals the improvement from using privileged features for distillation; (2) the performance of GenD is worse than no-distillation on Istella and Web30k. The reason for this inferior performance is that the teacher model in GenD only uses privileged features (and not regular features). For Istella and Web30k, using only privileged features is not sufficient to generate good predictions. The teachers in GenD are also worse than no-distillation; see Appendix A.4.

4.2 Ablation study on public datasets

PFD is not sensitive to α. In the former experiments, we kept the mixing ratio of teacher loss and data loss at α = 0.5. Here we evaluate the sensitivity of PFD to the parameter α. The experiments here use the Yahoo dataset and the RankBCE loss. From the left-hand side of Figure 3, we see that PFD delivers good performance over a large range of α. However, it is worth noting that the teacher loss is typically much larger than the data loss (e.g., about 20 times larger in this set of experiments), since the teacher’s predictions are much denser learning targets. The right-hand plot of Figure 3 takes the scale of both losses into consideration. It shows that PFD yields the best performance only when the teacher loss dominates the distillation loss.

PFD brings a larger gain when the positive labels are sparse. Recall that we view negative query groups as unlabeled data. Here we evaluate the performance of PFD under different numbers of positive labels. Specifically, by reducing $\tau_\text{target}$ from 4.8 to 0.4, we can increase the percentage of positive query groups (i.e., query groups with at least one y = 1). The relative improvement over the baseline is shown in Figure 4. While it is known that distillation works better when there are more unlabeled samples, Figure 4 shows that PFD further amplifies such gains: the relative gain of PFD over self-distillation also increases as the positive labels become sparser. This benefit is especially favorable in recommendation systems, where positive labels (e.g., clicks) are naturally very sparse.
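The metric is straightforward to compute; here is a small NumPy sketch with made-up scores and labels (recall that preprocessing removes query groups without a positive label, so the ideal DCG is never zero):

```python
import numpy as np

def dcg_at_k(ranking, y, k):
    # ranking[i] is the 1-indexed rank position of document i.
    return sum((2.0 ** y[i] - 1.0) / np.log2(1.0 + ranking[i])
               for i in range(len(y)) if ranking[i] <= k)

def ndcg_at_k(scores, y, k=8):
    order = np.argsort(-scores)               # sort by decreasing score
    ranking = np.empty_like(order)
    ranking[order] = np.arange(1, len(y) + 1)
    ideal_order = np.argsort(-y)              # optimal ranking sorts by label
    ideal = np.empty_like(ideal_order)
    ideal[ideal_order] = np.arange(1, len(y) + 1)
    return dcg_at_k(ranking, y, k) / dcg_at_k(ideal, y, k)

y = np.array([1.0, 0.0, 0.0, 1.0, 0.0])       # binary labels of one query group
scores = np.array([0.9, 0.8, 0.3, 0.2, 0.1])  # model scores
print(ndcg_at_k(scores, y))                   # ~ 0.877: second positive ranked 4th
```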
Correlation between privileged features and target. It is believed that privileged features that are discriminative (e.g., highly correlated with the target) lead to accurate teacher predictions, and thus benefit the distillation [XLG+20]. However, we show that PFD has poor performance when the privileged features are too discriminative. Specifically, we modify the experimental setting so that all the features in the datasets are used as regular features, while the privileged feature z is generated according to $z = \mathbb{1}(t \cdot r + G_1 > t \cdot \tau_\text{privileged} + G_0)$, where $G_1$ and $G_0$ have the same values as in the binary label generation (Equation (2)). By changing $\tau_\text{privileged}$, we can obtain privileged features z with different correlations with the label y. For instance, when $\tau_\text{privileged} = \tau_\text{target}$, z can perfectly predict y (since z = y by definition); and z becomes less discriminative as $\tau_\text{privileged}$ gets smaller. Using z as the privileged feature, we obtain the PFD results in Figure 5. Notice that the privileged feature with the largest correlation with y does not give the best performance. We believe the reason is that as the correlation of z and y increases, the privileged feature becomes so “discriminative” that it can explain almost all the variance in y, even the noise. As a result, the teacher predictions have high variance, which leads to high-variance student estimates and inferior testing performance. See Section 5.2 for theoretical insights.

4.3 Evaluation on Amazon’s dataset

Dataset overview and ranking model. The dataset is derived from Amazon’s logs, which contain query and product title text, the position at which each product was shown, and the user’s behaviors: click, add-to-cart, and purchase. The ranking model is a multi-layer transformer that maps query and product title to an estimate of the purchase likelihood. The goal is to rank the products that are more likely to be purchased first.

Efficacy of PFD. Here we evaluate the performance of PFD. Notice that position is a privileged feature, as it is not available as an input during online serving (it is the output of the ranking model that becomes the position). Further, click and add-to-cart are naturally privileged features, since one cannot know which of the products will be clicked or added to the cart before showing the products. The baseline no-distillation model only takes query and product title as input, while the teacher models in PFD additionally take positions, clicks, or add-to-cart as privileged features. As with the public datasets, we only use the positive query groups to train the teacher model and use all query groups for distillation. We additionally use pretraining-then-finetuning as another baseline, as predicting “click” or “add-to-cart” can serve as pretraining tasks. The experiment results are shown in Table 3.

Extension: Multi-teacher distillation. Inspired by [FSK+17, ZXHL18], we also evaluate multi-teacher distillation, where the student learns from more than one teacher. We adopt three privileged teachers which take positions, clicks, and add-to-cart as input, respectively. We calculate the loss w.r.t. each of the teachers’ predictions and use the average as the “teacher loss” in Equation (1), as sketched below. Intuitively, the student model is trained to learn from an “ensemble” of teacher models. Multi-teacher PFD yields the best performance: an 11.2% improvement in testing NDCG@8 over the baseline model.
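The multi-teacher variant changes only the teacher term of Equation (1). A minimal sketch follows, with random tensors standing in for the student scores and the three privileged teachers' predictions:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def multi_teacher_loss(student_label, y, student_all, teacher_preds, alpha=0.5):
    # Average the per-teacher losses; the average replaces the single
    # teacher-loss term of Equation (1).
    teacher_loss = torch.stack([bce(student_all, t) for t in teacher_preds]).mean()
    data_loss = bce(student_label, y)
    return alpha * data_loss + (1 - alpha) * teacher_loss

# Stand-ins for the position / click / add-to-cart teachers' predictions.
s_lab, y = torch.rand(8), torch.randint(0, 2, (8,)).float()
s_all = torch.rand(32)
teachers = [torch.rand(32) for _ in range(3)]
print(multi_teacher_loss(s_lab, y, s_all, teachers))
```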
5 Theoretical Insights In this section, we present theoretical insights into why and when PFD works via an analysis of linear models. While our empirical focus is on ranking problems, our theoretical insights are more general.

Consider the following learning problem: the regular feature $x \in \mathbb{R}^{d_x}$ is drawn from a spherical Gaussian distribution $\mathcal{N}(0, I_{d_x})$ and an unobservable feature $u \in \mathbb{R}^{d_u}$ is drawn from $\mathcal{N}(0, I_{d_u})$. With two unknown parameters $w^* \in \mathbb{R}^{d_x}$ and $v^* \in \mathbb{R}^{d_u}$, the label y is generated as follows:

$$y = x^\top w^* + u^\top v^* + \varepsilon, \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2), \tag{3}$$

where ε represents the label noise. During training time, we observe the features z = u as privileged features. Suppose that the labeled training set $S_\text{label} = (X \in \mathbb{R}^{n \times d_x}, Z \in \mathbb{R}^{n \times d_z}, y \in \mathbb{R}^n)$ and the unlabeled set $S_\text{unlabel} = (X^{(u)} \in \mathbb{R}^{m \times d_x}, Z^{(u)} \in \mathbb{R}^{m \times d_z})$ are generated according to the aforementioned data generation scheme. Let $X^{(a)} = [X; X^{(u)}] \in \mathbb{R}^{(n+m) \times d_x}$ and $Z^{(a)} = [Z; Z^{(u)}] \in \mathbb{R}^{(n+m) \times d_z}$ be all the inputs from both the labeled and unlabeled datasets. The goal is to learn to predict y with only the regular feature x as input.

5.1 PFD works by reducing estimation variance Let $\hat{w}_\text{reg}$ denote the model learned by standard linear regression and $\hat{w}_\text{pri}$ the model learned by privileged features distillation. For simplicity, we consider the case with α = 0, i.e., only learning from the teacher’s predictions during distillation. Specifically, standard linear regression only uses the set $S_\text{label}$, and $\hat{w}_\text{reg}$ is obtained by regressing y on X. PFD, on the other hand, first uses $S_\text{label}$ to regress y on [X; Z]. The learned model is then used to generate predictions $\hat{y}$ for $S_\text{label} \cup S_\text{unlabel}$. Finally, $\hat{y}$ is regressed on $X^{(a)}$, which gives $\hat{w}_\text{pri}$. We have the following result on the merit of PFD:

Theorem 1. For standard linear regression, we have that
$$\mathbb{E}_{X, y}\,\|w^* - \hat{w}_\text{reg}\|_2^2 = O\!\left(\frac{d_x \cdot (\sigma^2 + \|v^*\|^2)}{n}\right).$$
For privileged features distillation, we have that
$$\mathbb{E}_{X^{(a)}, Z^{(a)}, y}\,\|w^* - \hat{w}_\text{pri}\|_2^2 = O\!\left(\frac{d_x \cdot \sigma^2}{n}\right) + O\!\left(\frac{d_x \cdot \|v^*\|^2}{n + m}\right) + O\!\left(\frac{1}{n \cdot m}\right).$$

Notice that $\mathrm{var}(y \mid x) = \sigma^2 + \|v^*\|_2^2$, where $\sigma^2$ corresponds to the label noise and $\|v^*\|_2^2$ corresponds to the variance that can be explained by the privileged features. The result shows that PFD can explain a proper part of the variance in y via the privileged features z. By learning from the teacher’s predictions, PFD can therefore reduce the variance of $\hat{w}_\text{pri}$ by exploiting the privileged features and the unlabeled samples. On the other hand, when learning with plain linear regression, the label variance corresponding to z is treated as noise, which leads to an estimate with higher variance.

Remark 2. Why GenD has worse-than-baseline performance. Notice that the teacher model in GenD uses privileged features only. GenD has inferior performance for two reasons: (1) the privileged features alone are not enough for the teacher model to generate good predictions; and (2) when z is independent of x, the predictions from GenD’s teacher are not learnable by the student.
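The Section 5.1 procedure is easy to simulate with ordinary least squares. The sketch below uses Example 1's dimensions (d_x = d_u = 10, n = 30, m = 200, v* = [10, ..., 1]) but an illustrative σ = 5 rather than Example 1's σ = 15, so the variance gap is easy to see; with these values, Theorem 2's exact expression at d_z = d_u predicts a PFD error of roughly 45, against roughly 216 for the plain regression.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_u, n, m = 10, 10, 30, 200      # dimensions as in Example 1
sigma = 5.0                            # illustrative noise level (assumption)
w_star = rng.standard_normal(d_x)
v_star = np.arange(10, 0, -1.0)        # v* = [10, 9, ..., 1]

def ols(A, b):
    return np.linalg.lstsq(A, b, rcond=None)[0]

err_reg, err_pri = [], []
for _ in range(500):                   # average over draws of the training sets
    X, U = rng.standard_normal((n, d_x)), rng.standard_normal((n, d_u))
    y = X @ w_star + U @ v_star + sigma * rng.standard_normal(n)
    Xu, Uu = rng.standard_normal((m, d_x)), rng.standard_normal((m, d_u))

    # Baseline: regress y on X only; the u-part of y acts as extra noise.
    w_reg = ols(X, y)

    # PFD with z = u: teacher regresses y on [X; Z] over S_label, then its
    # predictions on X^(a), Z^(a) are regressed on X^(a) to give w_pri.
    teacher = ols(np.hstack([X, U]), y)
    X_all, Z_all = np.vstack([X, Xu]), np.vstack([U, Uu])
    y_hat = np.hstack([X_all, Z_all]) @ teacher
    w_pri = ols(X_all, y_hat)

    err_reg.append(np.sum((w_star - w_reg) ** 2))
    err_pri.append(np.sum((w_star - w_pri) ** 2))

print(np.mean(err_reg), np.mean(err_pri))  # PFD error is several times smaller
```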
5.2 PFD has inferior performance when the privileged features are too discriminative To understand the performance of PFD under different privileged features, consider the setting where $z \in \mathbb{R}^{d_z}$ is the first $d_z$ coordinates of u. When $d_z = d_u$, this recovers the setting of the previous subsection. Notice that the larger $d_z$ becomes, the better (x; z) can predict y. While one might expect that $d_z = d_u$ (i.e., when the privileged features contain the most information about y) leads to the best distillation performance, our next result shows that this belief is not true in general. Let $v^*_z$ be the part of $v^*$ that corresponds to z (i.e., the first $d_z$ coordinates of $v^*$). We have:

Theorem 2. For privileged features distillation, we have that
$$\mathbb{E}_{X^{(a)}, Z^{(a)}, y}\,\|w^* - \hat{w}_\text{pri}\|_2^2 = \frac{d_x \cdot (\sigma^2 + \|v^*\|_2^2 - \|v^*_z\|_2^2)}{n - d_x - d_z - 1} + \frac{d_x \cdot \|v^*_z\|_2^2}{n + m - d_x - 1} + O\!\left(\frac{1}{n \cdot m}\right).$$

As we increase $d_z$ from 0 to $d_u$, $\|v^*_z\|$ also increases. The teacher therefore explains more variance in y and contributes to a smaller error in the student estimate $\hat{w}_\text{pri}$. However, the denominator of the first term decreases as $d_z$ increases, which leads to a higher-variance (thus less accurate) student parameter estimate $\hat{w}_\text{pri}$. Combining the two effects, the privileged features z that contain the most information about y do not yield the best distillation performance. This matches the non-monotone observation in Figure 5, and the results in Table 3, where using add-to-cart (i.e., the most informative feature for predicting purchase) does not give the best PFD result.

Example 1. Consider the data generation shown in Equation (3). We set $d_x = 10$, $d_u = 10$, $n = 30$, $m = 200$, and draw $w^*$ from a spherical Gaussian distribution $\mathcal{N}(0, I_{d_x})$. Further, we set σ = 15 and let $v^* = [10, 9, \ldots, 2, 1]$. We evaluate the performance of standard linear regression and privileged features distillation with $d_z$ from 0 to 10. The results in Figure 6 show that the most predictive z does not give the best PFD performance.

6 Conclusion In this paper, we take a step toward understanding PFD in learning-to-rank. We first evaluate PFD on three public ranking datasets (Yahoo, Istella, and MSLR-Web30k) and an industrial-scale ranking problem derived from Amazon’s search logs. Our evaluation shows that PFD has the best performance in all evaluated settings. We further conduct comprehensive empirical ablation studies, which demonstrate the efficacy and robustness of PFD and uncover an interesting non-monotone behavior – as the predictive power of the privileged features increases, the performance of PFD first increases and then decreases. Finally, we present theoretical insights for PFD via rigorous analysis of linear models. The theoretical results show that (1) PFD is effective because it reduces the variance of the student estimate; and (2) a too-predictive privileged teacher produces high-variance predictions, which lead to high-variance (less accurate) student estimates and inferior testing performance.
1. What is the focus and contribution of the paper regarding privileged feature distillation? 2. What are the strengths and weaknesses of the proposed approach, particularly in its empirical evaluation and theoretical analysis? 3. Do you have any concerns or questions about the methodology, such as the choice of feature numbers or the use of multiple teachers? 4. How do you assess the novelty and limitations of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper empirically evaluates a distillation approach for privileged features. A teacher is trained with privileged features that are not available during inference, and a student then aims to replicate the performance of the teacher without these features. Empirical results on public and proprietary datasets show that this approach achieves better performance than the baselines. The authors further analyse the theoretical properties of this distillation approach in the case of linear models and show that it has desirable properties. Strengths And Weaknesses Strengths Paper is well written and easy to follow. Empirical evaluation is quite thorough, and I particularly enjoyed the section on the Amazon dataset, although these results are not reproducible. Theoretical results provide some insight into properties of PFD and could be used as a stepping stone for more analysis. Weaknesses I find that the paper has limited novelty. Section 4 is all about empirical evaluation, and while it has useful insights, the novelty is limited. The theoretical analysis in Section 5 is probably the most novel part, but it only analyses linear models and is of limited utility for the complex gradient boosting or deep learning models that are typically used for ranking. Moreover, in most cases it should be possible to use privileged features directly as additional targets. This is cheaper than distillation since it doesn't require training teacher models. Pretrain + finetune results in Table 3 perform similarly to distillation with one teacher, and call into question whether distillation is necessary here at all. I suspect that by carefully tuning weights in a multi-target loss it should be possible to recover multi-teacher performance with just one model. Questions -On line 185, why were these specific numbers of features chosen: 200, 50, and 40? What happens when you use more/fewer features for each dataset? I would think that this choice has a significant effect on performance. -Why do "Pretrain on click" and "Pretrain on add" perform better than the baseline? These models are not finetuned on purchases; does this mean that clicks and adds are better training targets? -Have you tried using position+click+add in one teacher? It seems odd to only use one of them when a much better teacher can probably be obtained with all three. Limitations NA
NIPS
Title Toward Understanding Privileged Features Distillation in Learning-to-Rank Abstract In learning-to-rank problems, a privileged feature is one that is available during model training, but not available at test time. Such features naturally arise in merchandised recommendation systems; for instance, “user clicked this item” as a feature is predictive of “user purchased this item” in the offline data, but is clearly not available during online serving. Another source of privileged features is those that are too expensive to compute online but feasible to be added offline. Privileged features distillation (PFD) refers to a natural idea: train a “teacher” model using all features (including privileged ones) and then use it to train a “student” model that does not use the privileged features. In this paper, we first study PFD empirically on three public ranking datasets and an industrial-scale ranking problem derived from Amazon’s logs. We show that PFD outperforms several baselines (no-distillation, pretraining-finetuning, self-distillation, and generalized distillation) on all these datasets. Next, we analyze why and when PFD performs well via both empirical ablation studies and theoretical analysis for linear models. Both investigations uncover an interesting non-monotone behavior: as the predictive power of a privileged feature increases, the performance of the resulting student model initially increases but then decreases. We show the reason for the later decreasing performance is that a very predictive privileged teacher produces predictions with high variance, which lead to high variance student estimates and inferior testing performance. 1 Introduction For recommendation systems, the features at test time are typically a subset of features available during training. Those missing features at test time are either too expensive to compute in real-time, or they are post-event features. For instance, for an e-commerce website, “click” is a strong feature for predicting “purchase”, but “click” exists as a feature only in the offline training data, but not during online serving (i.e., one cannot observe “click” before recommendations are generated). Those features that exist only during training are called privileged features. Those that exist during both training and testing are called regular features [XLG+20]. The naive approach is to ignore the privileged features and train a model that only takes regular features. Such methods inevitably miss the information in the privileged features and lead to inferior ⇤This work was done while Shuo Yang was interning at Amazon. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). performance. A natural instinct to resolve this is to (a) use the privileged features (either by themselves [LPSBV16] or in conjunction with regular features [XLG+20]) to train a “teacher” model, and then (b) use it to transfer information via distillation2 into a “student” model that only uses the regular features. The approach of a teacher only using privileged features is named generalized distillation (GenD) [LPSBV16], and the approach of a teacher using both privileged and regular features has been referred to as privileged feature distillation (PFD) [XLG+20]. 
In this paper we provide a detailed investigation – first via empirical ablation studies on moderatescale public and industrial-scale proprietary datasets with deep-learning-to-rank models, and second via rigorous theoretical analysis on simple linear models – into why and when privileged feature distillation works and when it does not. While this paper focuses on learning-to-rank, our results apply to regression/classification problems in general. As a summary, our main contributions are: • We evaluate PFD on three moderate-scale public ranking datasets: Yahoo, Istella, and MSLRWeb30k, and an industrial-scale proprietary dataset derived from Amazon search logs. • In all evaluated settings, PFD is better than or as good as the baselines: no-distillation, GenD (teacher model only uses privileged features), self-distillation (teacher model only uses regular features), and pretraining on privileged features then finetuning (when applicable) (Table 2). • We conduct comprehensive ablation studies for PFD. We find that – PFD is effective as long as the teacher loss dominates the distillation loss and the performance is not sensitive to ↵. Specifically, distillation loss is a linear combination of the loss w.r.t. data and the loss w.r.t. teacher predictions and ↵ is the mixing ratio (Figure 3). – While it is known that the gains from self-distillation (over a no-distillation one-shot training baseline) are larger when the positive labels are sparser, we see that these gains are further amplified by PFD; i.e. the relative gain of PFD over self-distillation also increases as the labels become sparser (Figure 4). – Non-monotonicity in the effectiveness of PFD: as the predictive power of a privileged feature increases, the resulting student performance initially increases but then decreases (Figure 5). • To provide a deeper insight into the landscape of privileged features and distillation, we next rigorously analyze it in a stylized setting involving linear models. We show that – PFD works because the teacher can explain away the variance arising from the privileged features, thus allowing the student to focus on the part it can predict. (Theorem 1). – The reason that GenD is inferior to PFD (as seen in our empirical evaluation) is because it results in a weaker teacher, and also because in the case where the privileged and regular features are independent, the teacher predictions appear as pure noise to the student (who cannot learn from them) (Remark 2). – A very predictive privileged feature induces high variance teacher predictions, which lead to inaccurate student estimates and inferior testing performance. This explains the observation that the most predictive privileged features do not give the best performance (i.e., the nonmonotonicity) in our empirical ablation studies (Theorem 2). The rest of the paper is organized as follows: Section 2 covers related works. Section 3 introduces the problem setup, the PFD algorithm and other algorithms for comparison. Section 4 presents empirical evaluation and ablation studies of PFD; and Section 5 presents theoretical insights. 2 Related Work Privileged features widely exist in different machine learning problems, including speech recognization [MM16], medical imaging [GCA+19], image super-resolution [LLKH20], etc [FA12, FTRS13, FKSH14, ALL17]. Privileged features are not accessible during testing either because they are too expensive to compute in real time, or because they are post-event features (thus cannot be used as input) [CM18]. 
Learning with privileged features is pioneered in [VV09], where they propose a framework named “learning using privileged information” (LUPI). At the core, LUPI uses privileged information to 2Here, by distillation we mean the standard practice of labeling the training dataset using teacher predictions, and using these as supervision targets in the training of the student model. distinguish between easy and hard examples. The methods are thus closely related to SVM, as the hardness of an example can be expressed by the slack variable. For instance, [VV09, PIVV10] propose the “SVM+” algorithm which generates slack variables from privileged features and learns an SVM based on regular features with those slack variables; [SQL13] proposes a pair-wise SVM algorithm for ranking, which uses privileged features to distinguish easy and hard pairs. [LHS14] presents a variation where the privileged features are used to generate importance weighting for different training samples. Empirically, [SER14] demonstrates that whether LUPI is effective critically depends on experimental settings (e.g., preprocessing, training/validation split, etc). [VI15] considers transferring the kernel function from a teacher SVM that only uses privileged features to a student SVM that only uses regular features; [LDX+20] extends the SVM+ algorithm to imperfect privileged features. Model distillation [HVD+15] is a common method for knowledge transfer, typically from a large model to a smaller one [PPA18, GYMT21]. Recent works have shown great empirical success in ranking problems [TW18, HAS+20, RPM+21] and even the cases where the teacher model and student model have the identical structure [FLT+18, QYT+21]. Using distillation to learn from privileged features are first proposed in [LPSBV16] as “generalized distillation” (GenD). It provides a unified view of LUPI and distillation. GenD, along with the variants [MM16, GMM19, LLKH20], train a teacher model with only privileged features and then train a student model to mimic the teacher’s predictions. PFD is recently proposed in [XLG+20], where the teacher model takes both regular and privileged features as input. PFD and GenD differ from the standard model distillation as they focus on exploiting privileged features but not on reducing the model size. [XLG+20] empirically demonstrates the superior performance of PFD for recommendation systems on a non-public data set. Understanding of privileged features distillation is lacking, despite the aforementioned empirical success. Previously, [PV10] shows that LUPI brings faster convergence under a strong assumption that the best classifier is realizable with only privileged features. [LPSBV16] shows that GenD enjoys a fast convergence rate. It assumes that the teacher model has a much smaller function class complexity than the student model, which does not match with PFD. [GCFY18] studies GenD under semi-supervised learning and shows that the benefits come from student function class complexity reduction. However, it does not quantify such reduction and the theory does not explain what is the benefit of using privileged features. To the best of our knowledge, there is no empirical or theoretical study explaining why PFD is effective. Other ways of utilizing privileged features are also previously proposed. [CJFY17] uses privileged information to learn a more diverse representation to improve image classification performance. [LLKH20, WZW+21] propose distillation schemes for better feature extraction from regular features. 
A more recent work [CJKB22] considers training a model with both regular and privileged features to obtain a better internal representation of the regular features. 3 Problem Setup and Algorithms Consider a learning-to-rank problem where each query-document pair has features x 2 X and z 2 Z and a label y 2 Y (e.g., click or human-annotated relevance) drawn from an unknown distribution D(y|x, z). Suppose x is the regular feature that is available during both training and testing and z is only available during training. Concretely, privileged feature is defined in the literature as below: Definition 1 (Privileged Feature [CJKB22]). For feature z that exists during training but not testing, we say z is a privileged feature if and only if I(y; z|x) := H(y|x) H(y|x, z) > 0. Conditional mutual information I(y; z|x) and conditional entropy H(·|·) follow from the standard notation of information theory. According to Definition 1, the privileged feature z provides extra predictive power of y. For the rest of this paper, we focus on the setting that z is a privileged feature. Remark 1. An implication of Definition 1 is that the privileged feature z can be independent of the regular feature x. In such cases, any transformation of z is not learnable from x, and therefore using z as auxiliary learning target does not help. Interestingly, PFD can still improve the student performance, even when z and x are independent (see Section 5). We consider the following general learning problem: we are given a labeled training set of size n, Slabel := {(xi, zi, yi)}i2[n], and a unlabeled training set of size m, Sunlabel := {(xi, zi)}i2[m]. Our goal is to generate good ranking based only on regular features x. For clarity of exposition, we only consider pointwise scoring functions F := {f | f : X 7! Y}, which generates a score for each document, and the ranking is induced by sorting the scores. The results in this paper can be easily extended to models beyond pointwise scoring functions (e.g., DASALC [QYZ+21]). The distinction between labeled and unlabeled datasets is for generality. The unlabeled dataset naturally appears in recommendation systems, where the majority of search logs do not contain any user interactions. Instead of taking all such logs as negative samples, it is more proper to view them as unlabeled data due to the lack of user engagement. For the logs that contain click, the documents therein with no click can be treated as negative samples. 3.1 Privileged features distillation PFD first trains a teacher model that takes both x and z as input to predict y, i.e., teacher function class is GPFD := {g | g : X ⇥ Z 7! Y}. For simplicity, we consider pointwise loss l : Y ⇥ Y 7! R in this section, while the method can be easily extended to other loss functions (see an extension to pairwise loss in Section 4). The privileged features distillation takes the following two steps: Step I: Training a teacher model gPFD 2 GPFD by minimizing the loss on the labeled dataset:P (xi,zi,yi)2Slabel l (g(xi, zi), yi). In practice, gradient-based optimizer is used for loss minimization. Step II: Training a student model by distillation. The teacher model gPFD trained from Step I is used to generate pseudo labels on Slabel and Sunlabel. Let Sall denote the union of Slabel and Sunlabel. 
The student model is trained by minimizing the following distillation loss: ↵ · X (xi,yi)2Slabel l(f(xi), yi) | {z } data loss + (1 ↵) · X (xi,zi)2Sall l (f(xi), gPFD(xi, zi)) | {z } teacher loss , (1) where ↵ 2 (0, 1) controls the mixing ratio between the data loss and teacher loss. The student model is trained by minimizing the distillation loss in Equation (1). 3.2 Other algorithms for comparisons Here we introduce two other algorithms for comparison. See illustration in Figure 1. GenD [LPSBV16] is a distillation method where the teacher model takes only privileged features as input, i.e., the teacher function class is GGenD = {g | g : Z 7! Y}. The teacher model gGenD 2 GGenD is obtained by minimizing P (zi,yi)2Slabel l (g(zi), yi). Similar to PFD, the distillation loss is a linear combination of the data loss and teacher loss. Self-distillation [FLT+18, QYT+21] is a distillation method where the teacher model has the same structure as the student model. Specifically, the teacher model gself-dist. 2 F is obtained by minimizingP (xi,yi)2Slabel l (g(xi), yi). Notice that F is also the student function class. Similar to PFD, the distillation loss is a linear combination of the data loss and teacher loss. Comparing PFD against self-distillation separates the benefits of adopting privileged features and distillation. 4 Experiments 4.1 Main results on public datasets We first evaluate the performance of PFD on three widely used public ranking datasets. Specifically, we use the Set1 from “Yahoo! Learn to rank challenge” [CC11]; “Istella Learning to Rank” dataset [DLN+16]; and Microsoft Learning to Rank “MSLR-Web30k” dataset [QL13]. We refer to them as “Yahoo”, “Istella” and “Web30k” throughout this section. Datasets overview and preprocessing. The training samples in all three datasets can be viewed as query groups, where each query group contains 1 query and multiple documents to be ranked. Each query-document pair is represented as a real-value feature vector (e.g., user dwelling time, tf-idf of document, etc. See [CC11] for detail). Further, each query-document pair has a human-annotated relevance score r 2 {0, 1, 2, 3, 4}. All datasets are preprocessed by removing query groups that contain no positive relevance score or have less than 10 documents. The features are transformed by the log1p transformation as in [ZWBN20, QYZ+21]. Binary label generation. In practice, binary label (e.g., click) is more commonly seen and easier to obtain than relevance score. For our experiments, we generate a binary label y for each querydocument pair based on the human-annotated relevance score r. Specifically: y = I (t · r +G1 > t · ⌧target +G0) , (2) where t is a temperature parameter and G1 and G0 follow the standard Gumbel distribution. It can be shown that y is 1 with probability (t · (r ⌧target)), where (·) is the sigmoid function (see Appendix A.1 for proof). For the rest of our experiments, we set t = 4 and ⌧target = 4.8 unless otherwise mentioned. We refer to the query groups that contain at least one y = 1 to be positive query groups, and other query groups are referred to as negative query groups. Regular and privileged features split. For each of the datasets, we sort the features according to the magnitude of their correlations with the binary label y and use the top 200, 50, and 40 features as privileged features for Yahoo, Istella, and Web30k, respectively. Other features are used as regular features. Please see Table 1 for dataset statistics after preprocessing and binary label generation. 
Ranking model and performance metric. The ranking model is a 5-layer fully connected neural network, which maps the query-document feature into a real-value score s 2 [0, 1]. The ranking b⇡ of documents is obtained by sorting the scores decreasingly, where b⇡(i) represents the ranked order of the i-th document. The ranking performance is measured by the NDCG@k metric: NDCG@k(b⇡,y) = DCG@k(b⇡,y) DCG@k(⇡⇤,y) , DCG@k(⇡,y) = X ⇡(i)k 2yi 1 log2(1 + ⇡(i)) , where ⇡⇤ is the optimal ranking obtained by sorting yi. PFD is effective for all three datasets. We evaluate the efficacy of PFD on all three aforementioned datasets, under both pointwise (RankBCE) and pairwise (RankNet [BSR+05]) loss functions (see definitions in Appendix A.2). Please see the evaluated algorithms and results in Table 2 (complete results with RankNet loss deferred to Table 4). Figure 2 shows the testing NDCG@8 curve on Yahoo and Web30k with RankBCE loss. Table 2 shows that PFD has the best performance on all evaluated settings. We remark that (1) the only difference between PFD and self-distillation is that the teacher in PFD additionally uses privileged features and therefore has better prediction accuracy than the teacher in self-distillation. Comparing PFD with self-distillation reveals the improvement of using “privileged features” for distillation; (2) the performance of GenD is worse than no-distillation on Istella and Web30k. The reason for such inferior performance is that the teacher model in GenD only uses privileged features (and not regular features). For Istella and Web30k, only using privileged features is not sufficient to generate good predictions. The teachers in GenD are also worse than no-distillation, see Appendix A.4. 4.2 Ablation study on public datasets PFD is not sensitive to ↵. In former experiments, we kept the mixing ratio of teacher loss and data loss to be ↵ = 0.5. Here we evaluate the sensitivity of PFD to parameter ↵. The experiments here use the Yahoo dataset and RankBCE loss. From the lefthand side of Figure 3, we see that PFD delivers good performance over a large range of ↵. However, it is worth noting that the teacher loss is typically much larger than the data loss (e.g., about 20 times larger in this set of experiments), since the teacher’s predictions are much denser learning targets. The right-hand side plot of Figure 3 takes the scale of both losses into consideration. It shows that PFD yields the best performance only when the teacher loss dominates the distillation loss. PFD brings a larger gain when the positive labels are sparse. Recall that we view negative query groups as unlabeled data. Here we evaluate the performance of PFD under different numbers of positive labels. Specifically, by reducing ⌧target from 4.8 to 0.4, we can increase the percentage of positive query groups (i.e., query groups with at least one y = 1). The relative improvement over baseline is shown in Figure 4. While it is known that distillation works better when there are more unlabeled samples, Figure 4 shows that PFD further amplifies such gains: the relative gain of PFD over self-distillation also increases as the positive labels become sparser. Such benefit is especially favorable in recommendation systems, where the positive labels (e.g., click) are naturally very sparse. Correlation between privileged features and target. 
It is believed that privileged features that are discriminative (e.g., high correlation with the target) lead to accurate teacher predictions, and thus benefit the distillation [XLG+20]. However, we show that PFD has poor performance when the privileged features are too discriminative. Specifically, we modify the experiment setting such that all the features in the datasets are used as regular features, while the privileged features z are generated according to z = I(t · r + G1 > t · ⌧privileged + G0), where G1 and G0 have the same values as in binary label y generation (Equation (2)). By changing ⌧privileged, we can obtain privileged features z with different correlations with the label y. For instance, when ⌧privileged = ⌧target, then z can perfectly predict y (since z = y by definition); and z becomes less discriminative when ⌧privileged gets smaller. Using z as the privileged feature, we have the PFD results in Figure 5. Notice that the privileged feature with the largest correlation with y does not give the best performance. We believe the reason is that as the correlation of z and y increases, the privileged feature becomes so “discriminative” that it can explain almost all the variance in y, even the noise. As a result, teacher predictions have high variance, which leads to high-variance student estimates and inferior testing performance. See Section 5.2 for theoretical insights. 4.3 Evaluation on Amazon’s dataset Dataset overview and ranking model. The dataset is derived from Amazon’s logs which contains query and product title text, the position at which the product was shown, and the user’s behaviors click, add-to-cart, and purchase. The ranking model is a multi-layer transformer that maps query and product title to an estimate of the purchase likelihood. The goal is to rank the products that are more likely to be purchased first. Efficacy of PFD. Here we evaluate the performance of PFD. Notice that the position is a privileged feature as it is not available as an input during online serving (it is the output of the ranking model that becomes position). Further, the click and add-to-cart are naturally privileged features, since one cannot know which of the product will be clicked or added to cart before showing the products. The baseline no-distillation model only takes query and product title as input, while the teacher models in PFD additionally take positions or clicks or add-to-cart as privileged features. As in public datasets, we only use the positive query groups to train the teacher model and use all query groups for distillation. We additionally use pretraining then finetuning as another baseline, as predicting “click” or “add-to-cart” can serve as pretraining tasks. The experiment results are shown in Table 3. Extension: Multi-teacher distillation. Inspired by [FSK+17, ZXHL18], we also evaluate the multiteacher distillation, where the student learns from more than one teacher. We adopt three privileged teachers which take positions, clicks, and add-to-cart as input, respectively. We calculate the loss w.r.t. each of the teachers’ predictions and use the average as “teacher loss” in Equation (1). Intuitively, the student model is trained to learn from an “ensemble” of teacher models. The multi-teacher PFD yields the best performance, an 11.2% improvement on testing NDCG@8 over the baseline model. 5 Theoretical Insights In this section, we present theoretical insights on why and when PFD works via analysis on linear models. 
While our empirical focus is on ranking problems, our theoretical insights are more general. Consider the following learning problem: the regular feature x 2 Rdx is drawn from a spherical Gaussian distribution N (0, Idx) and an un-observable feature u 2 Rdu is drawn from N (0, Idu). With two unknown parameters w⇤ 2 Rdx and v⇤ 2 Rdu , the label y is generated as following: y = x>w⇤ + u>v⇤ + ✏, ✏ ⇠ N (0, 2), (3) where ✏ represents the label noise. During training time, we observe the features z = u as privileged features. Suppose that the labeled training set Slabel = X 2 Rn⇥dx ,Z 2 Rn⇥dz ,y 2 Rn and the unlabeled set Sunlabel = X(u) 2 Rm⇥dx ,Z(u) 2 Rm⇥dz are generated according to the aforementioned data generation scheme. Let X(a) = [X;X(u)] 2 R(n+m)⇥dx and Z(a) = [Z;Z(u)] 2 R(n+m)⇥dz be the all the inputs from both labeled and unlabeled datasets. The goal is to learn to predict y with only regular feature x as input. 5.1 PFD works by reducing estimation variance Let bwreg denote the model learned by standard linear regression and bwpri be the model learned by privileged features distillation. For simplicity, we consider the case with ↵ = 0, i.e., only learning from the teacher’s prediction during distillation. Specifically, the standard linear regression only uses the set Slabel, and bwreg is obtained by regressing y on X. PFD, on the other hand, first uses Slabel to regress y on [X;Z]. The learned model is then used to generate predictions by for Slabel [ Sunlabel. Finally, by is regressed on X(a), which gives bwpri. We have the following result on the merit of PFD: Theorem 1. For standard linear regression, we have that EX,ykw⇤ bwregk22 = O ✓ dx · ( 2 + kv⇤k2) n ◆ . For privileged features distillation, we have that EX(a),Z(a),ykw⇤ bwprik22 = O ✓ dx · 2 n ◆ +O ✓ dx · kv⇤k2 n+m ◆ +O ✓ 1 n ·m ◆ . Notice that var(y|x) = 2 + kv⇤k22, where 2 corresponds to the label noise and kv⇤k22 corresponds to the variance that can be explained by the privileged features. The result shows that PFD can explain a proper part of the variance in y by privileged features z. By learning from the teacher’s predictions, PFD can therefore reduce the variance of bwpri by exploiting the privileged features and the unlabeled samples. On the other hand, when learning with plain linear regression, the label variance corresponding to z is treated as noise, which leads to estimation with higher variance. Remark 2. Why GenD has worse-than-baseline performance. Notice that the teacher model in GenD uses privileged features only. GenD has inferior performance for two reasons: (1) the privileged features alone are not enough for the teacher model to generate good predictions; and (2) when z is independent of x, the predictions from the GenD’s teacher are not learnable for the student. 5.2 PFD has inferior performance when the privileged features are too discriminative To understand the performance of PFD under different privileged features, consider the setting where z 2 Rdz is the first dz coordinates of u. When dz = du, it recovers the setting in previous subsection. Notice that the larger dz becomes, the better (x; z) can predict y. While one might expect that dz = du (i.e., when the privileged features contain the most information about y) leads to the best distillation performance, our next result shows that such belief is not true in general. Let v⇤z be the part of v⇤ that corresponds to z (i.e., the first dz coordinates of v⇤), we have: Theorem 2. 
5.2 PFD has inferior performance when the privileged features are too discriminative

To understand the performance of PFD under different privileged features, consider the setting where $z \in \mathbb{R}^{d_z}$ consists of the first $d_z$ coordinates of $u$. When $d_z = d_u$, this recovers the setting of the previous subsection. Notice that the larger $d_z$ becomes, the better $(x; z)$ can predict $y$. While one might expect that $d_z = d_u$ (i.e., when the privileged features contain the most information about $y$) leads to the best distillation performance, our next result shows that this belief is not true in general. Let $v^*_z$ be the part of $v^*$ that corresponds to $z$ (i.e., the first $d_z$ coordinates of $v^*$). We have:

Theorem 2. For privileged features distillation, we have that
$$\mathbb{E}_{X^{(a)}, Z^{(a)}, y}\left\|w^* - \hat{w}_{\text{pri}}\right\|_2^2 = \frac{d_x \cdot (\sigma^2 + \|v^*\|_2^2 - \|v^*_z\|_2^2)}{n - d_x - d_z - 1} + \frac{d_x \cdot \|v^*_z\|_2^2}{n + m - d_x - 1} + O\left(\frac{1}{n \cdot m}\right).$$

As we increase $d_z$ from 0 to $d_u$, $\|v^*_z\|$ also increases. The teacher therefore explains more variance in $y$ and contributes to a smaller error in the student estimate $\hat{w}_{\text{pri}}$. However, the denominator of the first term decreases as $d_z$ increases, which leads to a higher-variance (thus less accurate) student parameter estimate $\hat{w}_{\text{pri}}$. Combining the two effects, the privileged features $z$ that contain the most information about $y$ do not yield the best distillation performance. This matches the non-monotone observation in Figure 5, and the results in Table 3, where using add-to-cart (i.e., the most informative feature for predicting purchase) does not give the best PFD result.

Example 1. Consider the data generation shown in Equation (3). We set $d_x = 10$, $d_u = 10$, $n = 30$, $m = 200$, and draw $w^*$ from a spherical Gaussian distribution $\mathcal{N}(0, I_{d_x})$. Further, we set $\sigma = 15$ and let $v^* = [10, 9, \dots, 2, 1]$. We evaluate the performance of standard linear regression and privileged features distillation with $d_z$ from 0 to 10. The results in Figure 6 show that the most predictive $z$ does not give the best PFD performance; a simulation sketch in this spirit follows below.
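A simulation of Example 1, reusing the `fit_regression`/`fit_pfd` helpers sketched above; the number of random trials is our assumption (the paper does not specify it here), so exact values will differ from Figure 6, while the non-monotone trend should persist:

```python
d_x, d_u, n, m, sigma = 10, 10, 30, 200, 15.0
v_star = np.arange(10, 0, -1).astype(float)   # v* = [10, 9, ..., 2, 1]
rng = np.random.default_rng(0)
w_star = rng.standard_normal(d_x)             # w* ~ N(0, I)

def trial(d_z):
    X, X_u = rng.standard_normal((n, d_x)), rng.standard_normal((m, d_x))
    U, U_u = rng.standard_normal((n, d_u)), rng.standard_normal((m, d_u))
    y = X @ w_star + U @ v_star + sigma * rng.standard_normal(n)
    Z, Z_u = U[:, :d_z], U_u[:, :d_z]         # first d_z coordinates of u
    err_reg = np.sum((w_star - fit_regression(X, y)) ** 2)
    err_pfd = np.sum((w_star - fit_pfd(X, Z, y, X_u, Z_u)) ** 2)
    return err_reg, err_pfd

for d_z in range(d_u + 1):
    errs = np.mean([trial(d_z) for _ in range(500)], axis=0)
    print(f"d_z={d_z:2d}  regression={errs[0]:7.2f}  PFD={errs[1]:7.2f}")
```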
6 Conclusion

In this paper, we take a step toward understanding PFD in learning-to-rank. We first evaluate PFD on three public ranking datasets (Yahoo, Istella, and MSLR-Web30k) and an industrial-scale ranking problem derived from Amazon's search logs. Our evaluation shows that PFD has the best performance in all evaluated settings. We further conduct comprehensive empirical ablation studies, which demonstrate the efficacy and robustness of PFD and uncover an interesting non-monotone behavior: as the predictive power of the privileged features increases, the performance of PFD first increases and then decreases. Finally, we present theoretical insights for PFD via a rigorous analysis of linear models. The theoretical results show that (1) PFD is effective because it reduces the variance of the student estimate; and (2) a too-predictive privileged teacher produces high-variance predictions, which lead to high-variance (less accurate) student estimates and inferior testing performance.

1. What is the focus and contribution of the paper regarding Privileged Features Distillation?
2. What are the strengths of the proposed method, particularly in its practical applicability and theoretical understanding?
3. What are the weaknesses of the paper, especially regarding the figure layouts and notation explanations?
4. Do you have any concerns or questions about the PFD approach, such as the performance of students compared to teachers?
5. How do the authors address the limitation of selecting the most correlated features for privileged features?
Summary Of The Paper
In this work, the authors study Privileged Features Distillation (PFD), where indicative features are available in training but missing in serving; to leverage these privileged features, distillation from a teacher model trained with privileged features is deployed. Specifically, in the PFD proposed in this work, the teacher model leverages both the privileged features and the regular features available in serving. The proposed setting is shown to be better than all baselines on the public and industrial datasets. The authors also provide an empirical explanation of why and when PFD works, via an ablation study and theory on linear models. The main contributions are a practically applicable method, PFD, and a theoretical understanding of the method.

Strengths And Weaknesses
Strengths
- The paper is overall well-written, with all main contributions listed and a sufficient ablation study.
- First work that gives a reasonable understanding of why privileged features distillation works.

Weaknesses
- Figure layouts look a bit weird. The authors may have used some special template. Usually, a figure takes a full column instead of just a floating panel.
- Some notation might be better explained in the main text; for example, RankBCE is short for the binary cross-entropy loss. It is not very interpretable without checking the appendix.

Questions
- One puzzle to me is that, in Appendix Table 4, I find some PFD students perform as well as, and sometimes even better than, their PFD teachers (on Istella and Web30k). How could the students without the most explainable privileged features outperform the teachers with both regular and privileged features? Because the labeled dataset is much smaller? It would be helpful if the authors could explain this.
- What are the correlations of the selected privileged features? In the synthetic data, the authors picked the most correlated features from each public dataset as the privileged features. On the other hand, the authors showed that there is a non-monotonic dependence between student model performance and the correlation of the privileged features. It sounds like selecting the most correlated features is not optimal, unless they can show that the correlations of the most correlated features are around the range of optimal performance.

Limitations
N.A.
NIPS
1. What is the focus and contribution of the paper regarding Privileged Features Distillation (PFD) for Learning-to-Rank problems?
2. What are the strengths of the proposed approach, particularly in terms of its ability to transfer information from a "teacher" model to a "student" model via distillation?
3. What are the weaknesses of the paper, especially regarding typos and minor issues with clarity?
4. Do you have any questions regarding the experimental setup or results, such as how the temperature-based schema is defined or how the sensitivity trend to alpha varies across different datasets?
5. Are there any limitations or potential negative societal impacts associated with the work, and if so, how might they be addressed?
Summary Of The Paper
The authors provide an empirical study of Privileged Features Distillation (PFD) for learning-to-rank (LTR) problems, applied to three public datasets and one private industrial dataset (Amazon search logs). The principle of PFD is based on two models: 1) one that learns with all the features available (including the privileged ones) and plays the role of a "teacher" to 2) a second, "student" model, which is trained using only the regular features and into which teacher information is transferred via distillation. PFD is compared against four other baselines:
- no distillation (training only on regular features; no teacher, only one model)
- pre-training on privileged features followed by fine-tuning with only regular features
- self-distillation (the teacher model is trained only on non-privileged features)
- generalized distillation (GenD; the teacher model is trained only on privileged features)
Experiments show PFD performs better than or as well as the baselines. An ablation study and a theoretical analysis focused on linear models finally help in understanding when and why PFD works.

Strengths And Weaknesses
Strengths:
S1 (clarity, quality): The paper is well-written, and its clarity makes it easy to follow and enjoyable to read.
S2 (significance): This paper is a first attempt to bridge the gap in understanding PFD's performance. The extensive experiments and theoretical analysis conducted in this paper help to better understand PFD and support some intuitions (e.g., PFD cannot do miracles if the most discriminative features for the task at hand are privileged, which shows the superiority of PFD over GenD) as well as less intuitive results (the teacher loss should dominate the distillation loss; PFD works better with sparser labels; PFD reduces estimation variance). The supplemental work on using a teacher model with imputed privileged features during inference is also interesting.

Weaknesses:
W1 (clarity, minor): This is minor, but as the concept of privileged features is not limited to learning-to-rank problems, the first sentence of the abstract can be misleading.
Typos:
- row 54: « PDF »
- row 178: the indicator function is mentioned for the first time; it is always helpful for the reader to name it (and define it)!

Questions
Q1: Equation 2: By reading the sequel, we understand why we need the temperature-based schema to transform human-annotated relevance scores into binary relevance (rather than a simple threshold rule): the objective is to have a non-trivial artificial relationship between the label and the privileged features for the purpose of a later demonstration, but this could be clarified earlier. How is τ_target defined?
Q2: Regarding the sensitivity trend to α, do we observe the same for the other datasets?
Q3: Evaluation on the Amazon dataset: How is the "product title" feature handled in the model? What does the "query" feature look like?
Q4: Table 3: Multi-teacher distillation requires three teacher models, each of them trained on all regular features plus one privileged feature at a time. What about a single teacher (and one teacher loss) with all the privileged features at the same time? What is the corresponding performance?

Limitations
Yes, the limitations are enunciated; it is even the purpose of the paper to determine in which cases PFD does or does not perform well. Potential negative societal impact of the work is not mentioned, as this is a generic LTR problem.
Title Toward Understanding Privileged Features Distillation in Learning-to-Rank Abstract In learning-to-rank problems, a privileged feature is one that is available during model training, but not available at test time. Such features naturally arise in merchandised recommendation systems; for instance, “user clicked this item” as a feature is predictive of “user purchased this item” in the offline data, but is clearly not available during online serving. Another source of privileged features is those that are too expensive to compute online but feasible to be added offline. Privileged features distillation (PFD) refers to a natural idea: train a “teacher” model using all features (including privileged ones) and then use it to train a “student” model that does not use the privileged features. In this paper, we first study PFD empirically on three public ranking datasets and an industrial-scale ranking problem derived from Amazon’s logs. We show that PFD outperforms several baselines (no-distillation, pretraining-finetuning, self-distillation, and generalized distillation) on all these datasets. Next, we analyze why and when PFD performs well via both empirical ablation studies and theoretical analysis for linear models. Both investigations uncover an interesting non-monotone behavior: as the predictive power of a privileged feature increases, the performance of the resulting student model initially increases but then decreases. We show the reason for the later decreasing performance is that a very predictive privileged teacher produces predictions with high variance, which lead to high variance student estimates and inferior testing performance. 1 Introduction For recommendation systems, the features at test time are typically a subset of features available during training. Those missing features at test time are either too expensive to compute in real-time, or they are post-event features. For instance, for an e-commerce website, “click” is a strong feature for predicting “purchase”, but “click” exists as a feature only in the offline training data, but not during online serving (i.e., one cannot observe “click” before recommendations are generated). Those features that exist only during training are called privileged features. Those that exist during both training and testing are called regular features [XLG+20]. The naive approach is to ignore the privileged features and train a model that only takes regular features. Such methods inevitably miss the information in the privileged features and lead to inferior ⇤This work was done while Shuo Yang was interning at Amazon. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). performance. A natural instinct to resolve this is to (a) use the privileged features (either by themselves [LPSBV16] or in conjunction with regular features [XLG+20]) to train a “teacher” model, and then (b) use it to transfer information via distillation2 into a “student” model that only uses the regular features. The approach of a teacher only using privileged features is named generalized distillation (GenD) [LPSBV16], and the approach of a teacher using both privileged and regular features has been referred to as privileged feature distillation (PFD) [XLG+20]. 
In this paper we provide a detailed investigation – first via empirical ablation studies on moderatescale public and industrial-scale proprietary datasets with deep-learning-to-rank models, and second via rigorous theoretical analysis on simple linear models – into why and when privileged feature distillation works and when it does not. While this paper focuses on learning-to-rank, our results apply to regression/classification problems in general. As a summary, our main contributions are: • We evaluate PFD on three moderate-scale public ranking datasets: Yahoo, Istella, and MSLRWeb30k, and an industrial-scale proprietary dataset derived from Amazon search logs. • In all evaluated settings, PFD is better than or as good as the baselines: no-distillation, GenD (teacher model only uses privileged features), self-distillation (teacher model only uses regular features), and pretraining on privileged features then finetuning (when applicable) (Table 2). • We conduct comprehensive ablation studies for PFD. We find that – PFD is effective as long as the teacher loss dominates the distillation loss and the performance is not sensitive to ↵. Specifically, distillation loss is a linear combination of the loss w.r.t. data and the loss w.r.t. teacher predictions and ↵ is the mixing ratio (Figure 3). – While it is known that the gains from self-distillation (over a no-distillation one-shot training baseline) are larger when the positive labels are sparser, we see that these gains are further amplified by PFD; i.e. the relative gain of PFD over self-distillation also increases as the labels become sparser (Figure 4). – Non-monotonicity in the effectiveness of PFD: as the predictive power of a privileged feature increases, the resulting student performance initially increases but then decreases (Figure 5). • To provide a deeper insight into the landscape of privileged features and distillation, we next rigorously analyze it in a stylized setting involving linear models. We show that – PFD works because the teacher can explain away the variance arising from the privileged features, thus allowing the student to focus on the part it can predict. (Theorem 1). – The reason that GenD is inferior to PFD (as seen in our empirical evaluation) is because it results in a weaker teacher, and also because in the case where the privileged and regular features are independent, the teacher predictions appear as pure noise to the student (who cannot learn from them) (Remark 2). – A very predictive privileged feature induces high variance teacher predictions, which lead to inaccurate student estimates and inferior testing performance. This explains the observation that the most predictive privileged features do not give the best performance (i.e., the nonmonotonicity) in our empirical ablation studies (Theorem 2). The rest of the paper is organized as follows: Section 2 covers related works. Section 3 introduces the problem setup, the PFD algorithm and other algorithms for comparison. Section 4 presents empirical evaluation and ablation studies of PFD; and Section 5 presents theoretical insights. 2 Related Work Privileged features widely exist in different machine learning problems, including speech recognization [MM16], medical imaging [GCA+19], image super-resolution [LLKH20], etc [FA12, FTRS13, FKSH14, ALL17]. Privileged features are not accessible during testing either because they are too expensive to compute in real time, or because they are post-event features (thus cannot be used as input) [CM18]. 
Learning with privileged features is pioneered in [VV09], where they propose a framework named “learning using privileged information” (LUPI). At the core, LUPI uses privileged information to 2Here, by distillation we mean the standard practice of labeling the training dataset using teacher predictions, and using these as supervision targets in the training of the student model. distinguish between easy and hard examples. The methods are thus closely related to SVM, as the hardness of an example can be expressed by the slack variable. For instance, [VV09, PIVV10] propose the “SVM+” algorithm which generates slack variables from privileged features and learns an SVM based on regular features with those slack variables; [SQL13] proposes a pair-wise SVM algorithm for ranking, which uses privileged features to distinguish easy and hard pairs. [LHS14] presents a variation where the privileged features are used to generate importance weighting for different training samples. Empirically, [SER14] demonstrates that whether LUPI is effective critically depends on experimental settings (e.g., preprocessing, training/validation split, etc). [VI15] considers transferring the kernel function from a teacher SVM that only uses privileged features to a student SVM that only uses regular features; [LDX+20] extends the SVM+ algorithm to imperfect privileged features. Model distillation [HVD+15] is a common method for knowledge transfer, typically from a large model to a smaller one [PPA18, GYMT21]. Recent works have shown great empirical success in ranking problems [TW18, HAS+20, RPM+21] and even the cases where the teacher model and student model have the identical structure [FLT+18, QYT+21]. Using distillation to learn from privileged features are first proposed in [LPSBV16] as “generalized distillation” (GenD). It provides a unified view of LUPI and distillation. GenD, along with the variants [MM16, GMM19, LLKH20], train a teacher model with only privileged features and then train a student model to mimic the teacher’s predictions. PFD is recently proposed in [XLG+20], where the teacher model takes both regular and privileged features as input. PFD and GenD differ from the standard model distillation as they focus on exploiting privileged features but not on reducing the model size. [XLG+20] empirically demonstrates the superior performance of PFD for recommendation systems on a non-public data set. Understanding of privileged features distillation is lacking, despite the aforementioned empirical success. Previously, [PV10] shows that LUPI brings faster convergence under a strong assumption that the best classifier is realizable with only privileged features. [LPSBV16] shows that GenD enjoys a fast convergence rate. It assumes that the teacher model has a much smaller function class complexity than the student model, which does not match with PFD. [GCFY18] studies GenD under semi-supervised learning and shows that the benefits come from student function class complexity reduction. However, it does not quantify such reduction and the theory does not explain what is the benefit of using privileged features. To the best of our knowledge, there is no empirical or theoretical study explaining why PFD is effective. Other ways of utilizing privileged features are also previously proposed. [CJFY17] uses privileged information to learn a more diverse representation to improve image classification performance. [LLKH20, WZW+21] propose distillation schemes for better feature extraction from regular features. 
A more recent work [CJKB22] considers training a model with both regular and privileged features to obtain a better internal representation of the regular features. 3 Problem Setup and Algorithms Consider a learning-to-rank problem where each query-document pair has features x 2 X and z 2 Z and a label y 2 Y (e.g., click or human-annotated relevance) drawn from an unknown distribution D(y|x, z). Suppose x is the regular feature that is available during both training and testing and z is only available during training. Concretely, privileged feature is defined in the literature as below: Definition 1 (Privileged Feature [CJKB22]). For feature z that exists during training but not testing, we say z is a privileged feature if and only if I(y; z|x) := H(y|x) H(y|x, z) > 0. Conditional mutual information I(y; z|x) and conditional entropy H(·|·) follow from the standard notation of information theory. According to Definition 1, the privileged feature z provides extra predictive power of y. For the rest of this paper, we focus on the setting that z is a privileged feature. Remark 1. An implication of Definition 1 is that the privileged feature z can be independent of the regular feature x. In such cases, any transformation of z is not learnable from x, and therefore using z as auxiliary learning target does not help. Interestingly, PFD can still improve the student performance, even when z and x are independent (see Section 5). We consider the following general learning problem: we are given a labeled training set of size n, Slabel := {(xi, zi, yi)}i2[n], and a unlabeled training set of size m, Sunlabel := {(xi, zi)}i2[m]. Our goal is to generate good ranking based only on regular features x. For clarity of exposition, we only consider pointwise scoring functions F := {f | f : X 7! Y}, which generates a score for each document, and the ranking is induced by sorting the scores. The results in this paper can be easily extended to models beyond pointwise scoring functions (e.g., DASALC [QYZ+21]). The distinction between labeled and unlabeled datasets is for generality. The unlabeled dataset naturally appears in recommendation systems, where the majority of search logs do not contain any user interactions. Instead of taking all such logs as negative samples, it is more proper to view them as unlabeled data due to the lack of user engagement. For the logs that contain click, the documents therein with no click can be treated as negative samples. 3.1 Privileged features distillation PFD first trains a teacher model that takes both x and z as input to predict y, i.e., teacher function class is GPFD := {g | g : X ⇥ Z 7! Y}. For simplicity, we consider pointwise loss l : Y ⇥ Y 7! R in this section, while the method can be easily extended to other loss functions (see an extension to pairwise loss in Section 4). The privileged features distillation takes the following two steps: Step I: Training a teacher model gPFD 2 GPFD by minimizing the loss on the labeled dataset:P (xi,zi,yi)2Slabel l (g(xi, zi), yi). In practice, gradient-based optimizer is used for loss minimization. Step II: Training a student model by distillation. The teacher model gPFD trained from Step I is used to generate pseudo labels on Slabel and Sunlabel. Let Sall denote the union of Slabel and Sunlabel. 
The student model is trained by minimizing the following distillation loss: ↵ · X (xi,yi)2Slabel l(f(xi), yi) | {z } data loss + (1 ↵) · X (xi,zi)2Sall l (f(xi), gPFD(xi, zi)) | {z } teacher loss , (1) where ↵ 2 (0, 1) controls the mixing ratio between the data loss and teacher loss. The student model is trained by minimizing the distillation loss in Equation (1). 3.2 Other algorithms for comparisons Here we introduce two other algorithms for comparison. See illustration in Figure 1. GenD [LPSBV16] is a distillation method where the teacher model takes only privileged features as input, i.e., the teacher function class is GGenD = {g | g : Z 7! Y}. The teacher model gGenD 2 GGenD is obtained by minimizing P (zi,yi)2Slabel l (g(zi), yi). Similar to PFD, the distillation loss is a linear combination of the data loss and teacher loss. Self-distillation [FLT+18, QYT+21] is a distillation method where the teacher model has the same structure as the student model. Specifically, the teacher model gself-dist. 2 F is obtained by minimizingP (xi,yi)2Slabel l (g(xi), yi). Notice that F is also the student function class. Similar to PFD, the distillation loss is a linear combination of the data loss and teacher loss. Comparing PFD against self-distillation separates the benefits of adopting privileged features and distillation. 4 Experiments 4.1 Main results on public datasets We first evaluate the performance of PFD on three widely used public ranking datasets. Specifically, we use the Set1 from “Yahoo! Learn to rank challenge” [CC11]; “Istella Learning to Rank” dataset [DLN+16]; and Microsoft Learning to Rank “MSLR-Web30k” dataset [QL13]. We refer to them as “Yahoo”, “Istella” and “Web30k” throughout this section. Datasets overview and preprocessing. The training samples in all three datasets can be viewed as query groups, where each query group contains 1 query and multiple documents to be ranked. Each query-document pair is represented as a real-value feature vector (e.g., user dwelling time, tf-idf of document, etc. See [CC11] for detail). Further, each query-document pair has a human-annotated relevance score r 2 {0, 1, 2, 3, 4}. All datasets are preprocessed by removing query groups that contain no positive relevance score or have less than 10 documents. The features are transformed by the log1p transformation as in [ZWBN20, QYZ+21]. Binary label generation. In practice, binary label (e.g., click) is more commonly seen and easier to obtain than relevance score. For our experiments, we generate a binary label y for each querydocument pair based on the human-annotated relevance score r. Specifically: y = I (t · r +G1 > t · ⌧target +G0) , (2) where t is a temperature parameter and G1 and G0 follow the standard Gumbel distribution. It can be shown that y is 1 with probability (t · (r ⌧target)), where (·) is the sigmoid function (see Appendix A.1 for proof). For the rest of our experiments, we set t = 4 and ⌧target = 4.8 unless otherwise mentioned. We refer to the query groups that contain at least one y = 1 to be positive query groups, and other query groups are referred to as negative query groups. Regular and privileged features split. For each of the datasets, we sort the features according to the magnitude of their correlations with the binary label y and use the top 200, 50, and 40 features as privileged features for Yahoo, Istella, and Web30k, respectively. Other features are used as regular features. Please see Table 1 for dataset statistics after preprocessing and binary label generation. 
Ranking model and performance metric. The ranking model is a 5-layer fully connected neural network, which maps the query-document feature into a real-value score s 2 [0, 1]. The ranking b⇡ of documents is obtained by sorting the scores decreasingly, where b⇡(i) represents the ranked order of the i-th document. The ranking performance is measured by the NDCG@k metric: NDCG@k(b⇡,y) = DCG@k(b⇡,y) DCG@k(⇡⇤,y) , DCG@k(⇡,y) = X ⇡(i)k 2yi 1 log2(1 + ⇡(i)) , where ⇡⇤ is the optimal ranking obtained by sorting yi. PFD is effective for all three datasets. We evaluate the efficacy of PFD on all three aforementioned datasets, under both pointwise (RankBCE) and pairwise (RankNet [BSR+05]) loss functions (see definitions in Appendix A.2). Please see the evaluated algorithms and results in Table 2 (complete results with RankNet loss deferred to Table 4). Figure 2 shows the testing NDCG@8 curve on Yahoo and Web30k with RankBCE loss. Table 2 shows that PFD has the best performance on all evaluated settings. We remark that (1) the only difference between PFD and self-distillation is that the teacher in PFD additionally uses privileged features and therefore has better prediction accuracy than the teacher in self-distillation. Comparing PFD with self-distillation reveals the improvement of using “privileged features” for distillation; (2) the performance of GenD is worse than no-distillation on Istella and Web30k. The reason for such inferior performance is that the teacher model in GenD only uses privileged features (and not regular features). For Istella and Web30k, only using privileged features is not sufficient to generate good predictions. The teachers in GenD are also worse than no-distillation, see Appendix A.4. 4.2 Ablation study on public datasets PFD is not sensitive to ↵. In former experiments, we kept the mixing ratio of teacher loss and data loss to be ↵ = 0.5. Here we evaluate the sensitivity of PFD to parameter ↵. The experiments here use the Yahoo dataset and RankBCE loss. From the lefthand side of Figure 3, we see that PFD delivers good performance over a large range of ↵. However, it is worth noting that the teacher loss is typically much larger than the data loss (e.g., about 20 times larger in this set of experiments), since the teacher’s predictions are much denser learning targets. The right-hand side plot of Figure 3 takes the scale of both losses into consideration. It shows that PFD yields the best performance only when the teacher loss dominates the distillation loss. PFD brings a larger gain when the positive labels are sparse. Recall that we view negative query groups as unlabeled data. Here we evaluate the performance of PFD under different numbers of positive labels. Specifically, by reducing ⌧target from 4.8 to 0.4, we can increase the percentage of positive query groups (i.e., query groups with at least one y = 1). The relative improvement over baseline is shown in Figure 4. While it is known that distillation works better when there are more unlabeled samples, Figure 4 shows that PFD further amplifies such gains: the relative gain of PFD over self-distillation also increases as the positive labels become sparser. Such benefit is especially favorable in recommendation systems, where the positive labels (e.g., click) are naturally very sparse. Correlation between privileged features and target. 
Correlation between privileged features and target. It is believed that privileged features that are discriminative (e.g., highly correlated with the target) lead to accurate teacher predictions and thus benefit distillation [XLG+20]. However, we show that PFD performs poorly when the privileged features are too discriminative. Specifically, we modify the experimental setting so that all features in the datasets are used as regular features, while the privileged feature $z$ is generated according to $z = \mathbb{I}(t \cdot r + G_1 > t \cdot \tau_{\text{privileged}} + G_0)$, where $G_1$ and $G_0$ take the same values as in the binary label generation (Equation (2)). By changing $\tau_{\text{privileged}}$, we can obtain privileged features $z$ with different correlations with the label $y$. For instance, when $\tau_{\text{privileged}} = \tau_{\text{target}}$, $z$ can perfectly predict $y$ (since $z = y$ by definition); and $z$ becomes less discriminative as $\tau_{\text{privileged}}$ gets smaller. Using $z$ as the privileged feature, we obtain the PFD results in Figure 5. Notice that the privileged feature with the largest correlation with $y$ does not give the best performance. We believe the reason is that as the correlation between $z$ and $y$ increases, the privileged feature becomes so “discriminative” that it can explain almost all the variance in $y$, even the noise. As a result, the teacher predictions have high variance, which leads to high-variance student estimates and inferior testing performance. See Section 5.2 for theoretical insights.

4.3 Evaluation on Amazon's dataset Dataset overview and ranking model. The dataset is derived from Amazon's logs, which contain the query and product title text, the position at which the product was shown, and the user's behaviors: click, add-to-cart, and purchase. The ranking model is a multi-layer transformer that maps the query and product title to an estimate of the purchase likelihood. The goal is to rank the products that are more likely to be purchased first.

Efficacy of PFD. Here we evaluate the performance of PFD. Notice that position is a privileged feature, as it is not available as an input during online serving (position is determined by the output of the ranking model). Further, click and add-to-cart are naturally privileged features, since one cannot know which products will be clicked or added to the cart before showing them. The baseline no-distillation model takes only the query and product title as input, while the teacher models in PFD additionally take positions, clicks, or add-to-cart as privileged features. As with the public datasets, we use only the positive query groups to train the teacher model and use all query groups for distillation. We additionally use pretraining-then-finetuning as another baseline, since predicting “click” or “add-to-cart” can serve as a pretraining task. The experimental results are shown in Table 3.

Extension: multi-teacher distillation. Inspired by [FSK+17, ZXHL18], we also evaluate multi-teacher distillation, where the student learns from more than one teacher. We adopt three privileged teachers that take positions, clicks, and add-to-cart as input, respectively. We compute the loss w.r.t. each teacher's predictions and use the average as the “teacher loss” in Equation (1). Intuitively, the student model is trained to learn from an “ensemble” of teacher models. Multi-teacher PFD yields the best performance, an 11.2% improvement in testing NDCG@8 over the baseline model.

5 Theoretical Insights In this section, we present theoretical insights on why and when PFD works via an analysis of linear models.
While our empirical focus is on ranking problems, our theoretical insights are more general. Consider the following learning problem: the regular feature $x \in \mathbb{R}^{d_x}$ is drawn from a spherical Gaussian distribution $\mathcal{N}(0, I_{d_x})$ and an unobservable feature $u \in \mathbb{R}^{d_u}$ is drawn from $\mathcal{N}(0, I_{d_u})$. With two unknown parameters $w^* \in \mathbb{R}^{d_x}$ and $v^* \in \mathbb{R}^{d_u}$, the label $y$ is generated as follows:

$$y = x^\top w^* + u^\top v^* + \epsilon, \qquad \epsilon \sim \mathcal{N}(0, \sigma^2), \qquad (3)$$

where $\epsilon$ represents the label noise. During training, we observe the features $z = u$ as privileged features. Suppose that the labeled training set $S_{\text{label}} = (X \in \mathbb{R}^{n \times d_x}, Z \in \mathbb{R}^{n \times d_z}, y \in \mathbb{R}^n)$ and the unlabeled set $S_{\text{unlabel}} = (X^{(u)} \in \mathbb{R}^{m \times d_x}, Z^{(u)} \in \mathbb{R}^{m \times d_z})$ are generated according to the aforementioned data generation scheme. Let $X^{(a)} = [X; X^{(u)}] \in \mathbb{R}^{(n+m) \times d_x}$ and $Z^{(a)} = [Z; Z^{(u)}] \in \mathbb{R}^{(n+m) \times d_z}$ be all the inputs from both the labeled and unlabeled datasets. The goal is to learn to predict $y$ with only the regular feature $x$ as input.

5.1 PFD works by reducing estimation variance Let $\hat{w}_{\text{reg}}$ denote the model learned by standard linear regression and $\hat{w}_{\text{pri}}$ the model learned by privileged features distillation. For simplicity, we consider the case with $\alpha = 0$, i.e., learning only from the teacher's predictions during distillation. Specifically, standard linear regression uses only the set $S_{\text{label}}$, and $\hat{w}_{\text{reg}}$ is obtained by regressing $y$ on $X$. PFD, on the other hand, first uses $S_{\text{label}}$ to regress $y$ on $[X; Z]$. The learned model is then used to generate predictions $\hat{y}$ for $S_{\text{label}} \cup S_{\text{unlabel}}$. Finally, $\hat{y}$ is regressed on $X^{(a)}$, which gives $\hat{w}_{\text{pri}}$. We have the following result on the merit of PFD:

Theorem 1. For standard linear regression, we have that

$$\mathbb{E}_{X, y}\left\|w^* - \hat{w}_{\text{reg}}\right\|_2^2 = O\left(\frac{d_x \cdot (\sigma^2 + \|v^*\|^2)}{n}\right).$$

For privileged features distillation, we have that

$$\mathbb{E}_{X^{(a)}, Z^{(a)}, y}\left\|w^* - \hat{w}_{\text{pri}}\right\|_2^2 = O\left(\frac{d_x \cdot \sigma^2}{n}\right) + O\left(\frac{d_x \cdot \|v^*\|^2}{n+m}\right) + O\left(\frac{1}{n \cdot m}\right).$$

Notice that $\mathrm{var}(y \mid x) = \sigma^2 + \|v^*\|_2^2$, where $\sigma^2$ corresponds to the label noise and $\|v^*\|_2^2$ corresponds to the variance that can be explained by the privileged features. The result shows that PFD can explain a proper part of the variance in $y$ by the privileged features $z$. By learning from the teacher's predictions, PFD can therefore reduce the variance of $\hat{w}_{\text{pri}}$ by exploiting the privileged features and the unlabeled samples. In contrast, in plain linear regression, the label variance corresponding to $z$ is treated as noise, which leads to an estimate with higher variance.

Remark 2 (Why GenD has worse-than-baseline performance). Notice that the teacher model in GenD uses privileged features only. GenD has inferior performance for two reasons: (1) the privileged features alone are not enough for the teacher model to generate good predictions; and (2) when $z$ is independent of $x$, the predictions from GenD's teacher are not learnable by the student.

5.2 PFD has inferior performance when the privileged features are too discriminative To understand the performance of PFD under different privileged features, consider the setting where $z \in \mathbb{R}^{d_z}$ consists of the first $d_z$ coordinates of $u$. When $d_z = d_u$, this recovers the setting of the previous subsection. Notice that the larger $d_z$ becomes, the better $(x; z)$ can predict $y$. While one might expect that $d_z = d_u$ (i.e., when the privileged features contain the most information about $y$) leads to the best distillation performance, our next result shows that this belief is not true in general. Let $v^*_z$ be the part of $v^*$ that corresponds to $z$ (i.e., the first $d_z$ coordinates of $v^*$). We have:

Theorem 2.
For privileged features distillation, we have that

$$\mathbb{E}_{X^{(a)}, Z^{(a)}, y}\left\|w^* - \hat{w}_{\text{pri}}\right\|_2^2 = \frac{d_x \cdot (\sigma^2 + \|v^*\|_2^2 - \|v^*_z\|_2^2)}{n - d_x - d_z - 1} + \frac{d_x \cdot \|v^*_z\|_2^2}{n + m - d_x - 1} + O\left(\frac{1}{n \cdot m}\right).$$

As we increase $d_z$ from 0 to $d_u$, $\|v^*_z\|$ also increases. The teacher therefore explains more variance in $y$ and contributes a smaller error to the student estimate $\hat{w}_{\text{pri}}$. However, the denominator of the first term decreases as $d_z$ increases, which leads to a higher-variance (thus less accurate) student parameter estimate $\hat{w}_{\text{pri}}$. Combining the two effects, the privileged features $z$ that contain the most information about $y$ do not yield the best distillation performance. This matches the non-monotone observation in Figure 5, and the results in Table 3, where using add-to-cart (i.e., the most informative feature for predicting purchase) does not give the best PFD result.

Example 1. Consider the data generation shown in Equation (3). We set $d_x = 10$, $d_u = 10$, $n = 30$, $m = 200$, and draw $w^*$ from a spherical Gaussian distribution $\mathcal{N}(0, I_{d_x})$. Further, we set $\sigma = 15$ and let $v^* = [10, 9, \cdots, 2, 1]$. We evaluate the performance of standard linear regression and of privileged features distillation with $d_z$ ranging from 0 to 10. The results in Figure 6 show that the most predictive $z$ does not give the best PFD performance.

6 Conclusion In this paper, we take a step toward understanding PFD in learning-to-rank. We first evaluate PFD on three public ranking datasets (Yahoo, Istella, and MSLR-Web30k) and on an industrial-scale ranking problem derived from Amazon's search logs. Our evaluation shows that PFD has the best performance in all evaluated settings. We further conduct comprehensive empirical ablation studies, which demonstrate the efficacy and robustness of PFD and uncover an interesting non-monotone behavior: as the predictive power of the privileged features increases, the performance of PFD first increases and then decreases. Finally, we present theoretical insights into PFD via a rigorous analysis of linear models. The theoretical results show that (1) PFD is effective because it reduces the variance of the student estimate; and (2) a too-predictive privileged teacher produces high-variance predictions, which lead to high-variance (less accurate) student estimates and inferior testing performance.
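To make Example 1 concrete, the following is a minimal numpy sketch of the two-stage PFD procedure for linear models (with α = 0, as in Section 5.1), swept over d_z. All names and implementation details are ours; this is only an illustration of the simulation described in Example 1, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_ls(A, b):
    """Ordinary least squares via numpy's lstsq."""
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Data generation of Eq. (3), with parameters roughly as in Example 1.
dx, du, n, m, sigma = 10, 10, 30, 200, 15.0
w_star = rng.normal(size=dx)
v_star = np.arange(10, 0, -1).astype(float)  # v* = [10, 9, ..., 1]

def simulate(dz, trials=200):
    err_reg, err_pfd = [], []
    for _ in range(trials):
        X = rng.normal(size=(n + m, dx))      # first n rows are labeled
        U = rng.normal(size=(n + m, du))
        y = X[:n] @ w_star + U[:n] @ v_star + sigma * rng.normal(size=n)
        Z = U[:, :dz]                          # privileged: first dz coords of u
        # Baseline: regress y on labeled X only.
        w_reg = fit_ls(X[:n], y)
        # PFD (alpha = 0): teacher on [X; Z] (labeled), then student fits the
        # teacher's predictions over labeled + unlabeled inputs.
        teacher = fit_ls(np.hstack([X[:n], Z[:n]]), y)
        y_hat = np.hstack([X, Z]) @ teacher
        w_pfd = fit_ls(X, y_hat)
        err_reg.append(np.sum((w_star - w_reg) ** 2))
        err_pfd.append(np.sum((w_star - w_pfd) ** 2))
    return np.mean(err_reg), np.mean(err_pfd)

for dz in [0, 2, 5, 8, 10]:
    print(dz, simulate(dz))  # PFD error is typically non-monotone in dz
```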
1. What is the main contribution of the paper regarding the privileged feature distillation problem? 2. What are the strengths of the paper, particularly in terms of its empirical evaluation and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding its theoretical depth and experimental settings? 4. Do you have any concerns or suggestions regarding the paper's relevance to learning to rank?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies the privileged feature distillation (PFD) problem. The paper consists of two parts - empirical evaluation on public datasets and an industry dataset, followed by some theoretical analysis on linear models. The paper focuses on understanding an existing method instead of proposing new methods. The empirical part confirms that PFD is effective on several datasets. Some ablations are provided in terms of label sparsity, etc. On the three public datasets, the setting is controlled - binary labels are generated and privileged features are manually selected. The evaluation on the industry dataset looks more standard. On the theoretical part, the analysis is done on linear models. The insights found include 1) PFD works by reducing estimation variance, and 2) an explanation of why overly discriminative privileged features can hurt. Overall the reviewer finds this paper well written in general. The reviewer feels the empirical study meets the bar by performing on multiple datasets and comparing with sensible baselines. The theoretical part looks reasonable but not surprising. Strengths And Weaknesses Strengths The reviewer has personal interest in the topic (though is not sure about the interest from a wider group). The paper is generally well written. The reviewer feels the empirical evaluation meets the bar, by performing on multiple datasets and comparing with sensible baselines. Some ablations look interesting. The theoretical part is clear and focuses on important aspects. Weaknesses The theoretical analysis is not particularly deep. The conclusions are intuitive and nothing is surprising. Focusing on linear models is ok but may not be very impressive. The controlled setting on public datasets seems a bit artificial and may be biased toward the methods under study. For example, the most correlated features are used as privileged features. Considering other options could be more comprehensive. The paper does not show any online experiments, which is the major motivation of PFD. Questions Please see above. One comment is that the reviewer does not find anything specific to learning to rank except for the datasets used. The reviewer does not feel it is a big issue but is a bit surprised. The authors may consider clarifying why learning to rank is the topic, as in the title and such. Limitations NA
NIPS
Title Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network Abstract The prediction of organic reaction outcomes is a fundamental problem in computational chemistry. Since a reaction may involve hundreds of atoms, fully exploring the space of possible transformations is intractable. The current solution utilizes reaction templates to limit the space, but it suffers from coverage and efficiency issues. In this paper, we propose a template-free approach to efficiently explore the space of product molecules by first pinpointing the reaction center - the set of nodes and edges where graph edits occur. Since only a small number of atoms contribute to the reaction center, we can directly enumerate candidate products. The generated candidates are scored by a Weisfeiler-Lehman Difference Network that models high-order interactions between changes occurring at nodes across the molecule. Our framework outperforms the top-performing template-based approach by a 10% margin, while running orders of magnitude faster. Finally, we demonstrate that the model accuracy rivals the performance of domain experts.

1 Introduction One of the fundamental problems in organic chemistry is the prediction of which products form as a result of a chemical reaction [16, 17]. While the products can be determined unambiguously for simple reactions, it is a major challenge for many complex organic reactions. Indeed, experimentation remains the primary manner in which reaction outcomes are analyzed. This is time consuming, expensive, and requires the help of an experienced chemist. The empirical approach is particularly limiting for the goal of automatically designing efficient reaction sequences that produce specific target molecule(s), a problem known as chemical retrosynthesis [16, 17]. Viewing molecules as labeled graphs over atoms, we propose to formulate the reaction prediction task as a graph transformation problem. A chemical reaction transforms input molecules (reactants) into new molecules (products) by performing a set of graph edits over reactant molecules, adding new edges and/or eliminating existing ones. Given that a typical reaction may involve more than 100 atoms, fully exploring all possible transformations is intractable. The computational challenge is how to reduce the space of possible edits effectively, and how to select the product from among the resulting candidates. The state-of-the-art solution is based on reaction templates (Figure 1). A reaction template specifies a molecular subgraph pattern to which it can be applied and the corresponding graph transformation. Since multiple templates can match a set of reactants, another model is trained to filter candidate products using standard supervised approaches. The key drawbacks of this approach are coverage and scalability. A large number of templates is required to ensure that at least one can reconstitute the correct product. The templates are currently either hand-crafted by experts [7, 1, 15] or generated from reaction databases with heuristic algorithms [2, 11, 3]. For example, Coley et al. [3] extract 140K unique reaction templates from a database of 1 million reactions. Beyond coverage, applying a template involves graph matching, and this makes examining large numbers of templates prohibitively expensive. The current approach is therefore limited to small datasets with limited types of reactions.
In this paper, we propose a template-free approach by learning to identify the reaction center, a small set of atoms/bonds that change from reactants to products. In our datasets, on average only 5.5% of the reactant molecules directly participate in the reaction. The small size of the reaction centers together with additional constraints on bond formations enables us to directly enumerate candidate products. Our forward-prediction approach is then divided into two key parts: (1) learning to identify reaction centers and (2) learning to rank the resulting enumerated candidate products. Our technical approach builds on a neural embedding of the Weisfeiler-Lehman isomorphism test. We incorporate a specific attention mechanism to identify reaction centers while leveraging distal chemical effects not accounted for in related convolutional representations [5, 4]. Moreover, we propose a novel Weisfeiler-Lehman Difference Network to learn to represent and efficiently rank candidate transformations between reactants and products. We evaluate our method on two datasets derived from the USPTO [13], and compare our methods to the current top performing system [3]. Our method achieves 83.9% and 77.9% accuracy on the two datasets, outperforming the baseline approach by 10%, while running 140 times faster. Finally, we demonstrate that the model outperforms domain experts by a large margin.

2 Related Work Template-based Approach Existing machine learning models for product prediction are mostly built on reaction templates. These approaches differ in the way templates are specified and in the way the final product is selected from multiple candidates. For instance, Wei et al. [18] learn to select among 16 pre-specified, hand-encoded templates, given fingerprints of reactants and reagents. While this work was developed on a narrow range of chemical reaction types, it is among the first implementations that demonstrate the potential of neural models for analyzing chemical reactions. More recent work has demonstrated the power of neural methods on a broader set of reactions. For instance, Segler and Waller [14] and Coley et al. [3] use a data-driven approach to obtain a large set of templates, and then employ a neural model to rank the candidates. The key difference between these approaches is the representation of the reaction. In Segler and Waller [14], molecules are represented based on their Morgan fingerprints, while Coley et al. [3] represent reactions by the features of atoms and bonds in the reaction center. However, the template-based architecture limits both of these methods in scaling up to larger datasets with more diversity.

Template-free Approach Kayala et al. [8] also presented a template-free approach to predict reaction outcomes. Our approach differs from theirs in several ways. First, Kayala et al. operate at the mechanistic level - identifying elementary mechanistic steps rather than the overall transformations from reactants to products. Since most reactions consist of many mechanistic steps, their approach requires multiple predictions to complete an entire reaction. Our approach operates at the graph level - predicting transformations from reactants to products in a single step. Second, mechanistic descriptions of reactions are not given in existing reaction databases. Therefore, Kayala et al. created their training set based on a mechanistic-level template-driven expert system. In contrast, our model is learned directly from real-world experimental data. Third, Kayala et al.
use feed-forward neural networks where atoms and graphs are represented by molecular fingerprints and additional hand-crafted features. Our approach builds on graph neural networks to encode graph structures.

Molecular Graph Neural Networks The question of molecular graph representation is a key issue in reaction modeling. In computational chemistry, molecules are often represented with Morgan Fingerprints, boolean vectors that reflect the presence of various substructures in a given molecule. Duvenaud et al. [5] developed a neural version of Morgan Fingerprints, where each convolution operation aggregates features of neighboring nodes as a replacement of the fixed hashing function. This representation was further expanded by Kearnes et al. [9] into graph convolution models. Dai et al. [4] consider a different architecture where a molecular graph is viewed as a latent variable graphical model. Their recurrent model is derived from Belief Propagation-like algorithms. Gilmer et al. [6] generalized all previous architectures into a message-passing network, and applied it to quantum chemistry. The closest to our work is the Weisfeiler-Lehman Kernel Network proposed by Lei et al. [12]. This recurrent model is derived from the Weisfeiler-Lehman kernel that produces isomorphism-invariant representations of molecular graphs. In this paper, we further enhance this representation to capture graph transformations for reaction prediction.

3 Overview Our approach bypasses reaction templates by learning a reaction center identifier. Specifically, we train a neural network that operates on the reactant graph to predict a reactivity score for every pair of atoms (Section 3.1). A reaction center is then selected by picking a small number of atom pairs with the highest reactivity scores. After identifying the reaction center, we generate possible product candidates by enumerating possible bond configurations between atoms in the reaction center (Section 3.2), subject to chemical constraints. We train another neural network to rank these product candidates (represented as graphs, together with the reactants) so that the correct reaction outcome is ranked highest (Section 3.3). The overall pipeline is summarized in Figure 2. Before describing the two modules in detail, we formally define some key concepts used throughout the paper.

Chemical Reaction A chemical reaction is a pair of molecular graphs $(G_r, G_p)$, where $G_r$ is called the reactants and $G_p$ the products. A molecular graph is described as $G = (V, E)$, where $V = \{a_1, a_2, \cdots, a_n\}$ is the set of atoms and $E = \{b_1, b_2, \cdots, b_m\}$ is the set of associated bonds of varying types (single, double, aromatic, etc.). Note that $G_r$ has multiple connected components, since multiple molecules comprise the reactants. The reactions used for training are atom-mapped so that each atom in the product graph has a unique corresponding atom in the reactants.

Reaction Center A reaction center is a set of atom pairs $\{(a_i, a_j)\}$ where the bond type between $a_i$ and $a_j$ differs from $G_r$ to $G_p$. In other words, a reaction center is a minimal set of graph edits needed to transform reactants to products. Since the reported reactions in the training set are atom-mapped, reaction centers can be identified automatically given the product.

3.1 Reaction Center Identification In a given reaction $R = (G_r, G_p)$, each atom pair $(a_u, a_v)$ in $G_r$ is associated with a reactivity label $y_{uv} \in \{0, 1\}$ specifying whether their relation differs between reactants and products.
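Since the training reactions are atom-mapped, these labels (and hence the reaction center) can be read directly off the graph pair; below is a minimal pure-Python sketch of that comparison. The dictionary-based bond encoding is our own toy format, not the paper's data representation.

```python
from itertools import combinations

def reactivity_labels(atoms, bonds_r, bonds_p):
    """Given atom-mapped reactant/product bond maps of the form
    {frozenset({u, v}): bond_type}, return y_uv = 1 iff the bond between
    u and v differs between G_r and G_p. Absent bonds map to None."""
    labels = {}
    for u, v in combinations(sorted(atoms), 2):
        key = frozenset({u, v})
        labels[(u, v)] = int(bonds_r.get(key) != bonds_p.get(key))
    return labels

# Toy example: bond 1-2 changes from single to double; bond 2-3 is broken.
atoms = [1, 2, 3]
bonds_r = {frozenset({1, 2}): "single", frozenset({2, 3}): "single"}
bonds_p = {frozenset({1, 2}): "double"}
print(reactivity_labels(atoms, bonds_r, bonds_p))
# {(1, 2): 1, (1, 3): 0, (2, 3): 1} -> reaction center = {(1, 2), (2, 3)}
```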
The label is determined by comparing $G_r$ and $G_p$ with the help of atom-mapping. We predict the label on the basis of learned atom representations that incorporate contextual cues from the surrounding chemical environment. In particular, we build on a Weisfeiler-Lehman Network (WLN) that has shown superior results against other learned graph representations in the narrower setting of predicting chemical properties of individual molecules [12].

3.1.1 Weisfeiler-Lehman Network (WLN) The WLN is inspired by the Weisfeiler-Lehman isomorphism test for labeled graphs. The architecture is designed to embed the computations inherent in WL isomorphism testing to generate learned isomorphism-invariant representations for atoms.

WL Isomorphism Test The key idea of the isomorphism test is to repeatedly augment node labels by the sorted set of node labels of neighbor nodes and to compress these augmented labels into new, short labels. The initial labeling is the atom element. In each iteration, its label is augmented with the element labels of its neighbors. Such a multi-set label is compactly represented as a new label by a hash function. Let $c_v^{(L)}$ be the final label of atom $a_v$. The molecular graph $G = (V, E)$ is represented as a set $\{(c_u^{(L)}, b_{uv}, c_v^{(L)}) \mid (u, v) \in E\}$, where $b_{uv}$ is the bond type between $u$ and $v$. Two graphs are said to be isomorphic if their set representations are the same. The number of distinct labels grows exponentially with the number of iterations $L$.

WL Network The discrete relabeling process does not directly generalize to continuous feature vectors. Instead, we appeal to neural networks to continuously embed the computations inherent in the WL test. Let $r$ be the analogous continuous relabeling function. Then a node $v \in G$ with neighbor nodes $N(v)$, node features $f_v$, and edge features $f_{uv}$ is "relabeled" according to

$$r(v) = \tau\Big(U_1 f_v + U_2 \sum_{u \in N(v)} \tau(V[f_u, f_{uv}])\Big) \qquad (1)$$

where $\tau(\cdot)$ could be any non-linear function. We apply this relabeling operation iteratively to obtain context-dependent atom vectors

$$h_v^{(l)} = \tau\Big(U_1 h_v^{(l-1)} + U_2 \sum_{u \in N(v)} \tau(V[h_u^{(l-1)}, f_{uv}])\Big) \qquad (1 \le l \le L) \qquad (2)$$

where $h_v^{(0)} = f_v$ and $U_1, U_2, V$ are shared across layers. The final atom representations arise from mimicking the set comparison function in the WL isomorphism test, yielding

$$c_v = \sum_{u \in N(v)} W^{(0)} h_u^{(L)} \odot W^{(1)} f_{uv} \odot W^{(2)} h_v^{(L)} \qquad (3)$$

The set comparison here is realized by matching each rank-1 edge tensor $h_u^{(L)} \otimes f_{uv} \otimes h_v^{(L)}$ to a set of reference edges also cast as rank-1 tensors $W^{(0)}[k] \otimes W^{(1)}[k] \otimes W^{(2)}[k]$, where $W[k]$ is the $k$-th row of matrix $W$. In other words, Eq. 3 above could be written as

$$c_v[k] = \sum_{u \in N(v)} \left\langle W^{(0)}[k] \otimes W^{(1)}[k] \otimes W^{(2)}[k],\; h_u^{(L)} \otimes f_{uv} \otimes h_v^{(L)} \right\rangle \qquad (4)$$

The resulting $c_v$ is a vector representation that captures the local chemical environment of the atom (through relabeling) and involves a comparison against a learned set of reference environments. The representation of the whole graph $G$ is simply the sum over all the atom representations: $c_G = \sum_v c_v$.

3.1.2 Finding Reaction Centers with WLN We present two models to predict reactivity: the local and global models. Our local model is based directly on the atom representations $c_u$ and $c_v$ in predicting label $y_{uv}$. The global model, on the other hand, selectively incorporates distal chemical effects with the goal of capturing the fact that atoms outside of the reaction center may be necessary for the reaction to occur. For example, the reaction center may be influenced by certain reagents¹. We incorporate these distal effects into the global model through an attention mechanism.
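Before detailing the two reactivity models, here is a minimal numpy sketch of the WLN layer itself (Eqs. (2)-(3)). Purely for illustration, we assume atom and bond features share one dimension d, take τ to be ReLU, and substitute random matrices for the learned parameters; the real model is trained end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def wln_embeddings(F, Fe, nbrs, L=3):
    """Sketch of WLN relabeling (Eq. 2) and readout (Eq. 3).
    F: (n_atoms, d) atom features; Fe[(u, v)]: (d,) bond features; nbrs[v]:
    neighbor list of atom v. Random matrices stand in for U1, U2, V, W(0-2)."""
    n, d = F.shape
    U1 = rng.normal(scale=0.1, size=(d, d))
    U2 = rng.normal(scale=0.1, size=(d, d))
    V = rng.normal(scale=0.1, size=(d, 2 * d))
    W0, W1, W2 = (rng.normal(scale=0.1, size=(d, d)) for _ in range(3))

    h = F.copy()
    for _ in range(L):  # iterative relabeling, Eq. (2)
        msgs = np.stack([
            sum(relu(V @ np.concatenate([h[u], Fe[(u, v)]])) for u in nbrs[v])
            for v in range(n)
        ])
        h = relu(h @ U1.T + msgs @ U2.T)
    # Readout, Eq. (3): element-wise products against learned reference edges.
    c = np.zeros((n, d))
    for v in range(n):
        for u in nbrs[v]:
            c[v] += (W0 @ h[u]) * (W1 @ Fe[(u, v)]) * (W2 @ h[v])
    return c  # per-atom vectors; the graph vector is c.sum(axis=0)

# Toy 3-atom chain 0-1-2 with d = 4.
d = 4
F = rng.normal(size=(3, d))
Fe = {}
for (u, v) in [(0, 1), (1, 2)]:
    e = rng.normal(size=d)
    Fe[(u, v)] = Fe[(v, u)] = e
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(wln_embeddings(F, Fe, nbrs).shape)  # (3, 4)
```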
Local Model Let $c_u$, $c_v$ be the atom representations for atoms $u$ and $v$, respectively, as returned by the WLN. We predict the reactivity score of $(u, v)$ by passing these through another neural network:

$$s_{uv} = \sigma\left(u^\top \tau(M_a c_u + M_a c_v + M_b b_{uv})\right) \qquad (5)$$

where $\sigma(\cdot)$ is the sigmoid function, and $b_{uv}$ is an additional feature vector that encodes auxiliary information about the pair, such as whether the two atoms are in different molecules or which type of bond connects them.

Global Model Let $\alpha_{uv}$ be the attention score of atom $v$ on atom $u$. The global context representation $\tilde{c}_u$ of atom $u$ is calculated as the weighted sum of all reactant atoms, where the weights come from the attention module:

$$\tilde{c}_u = \sum_v \alpha_{uv} c_v; \qquad \alpha_{uv} = \sigma\left(u^\top \tau(P_a c_u + P_a c_v + P_b b_{uv})\right) \qquad (6)$$

$$s_{uv} = \sigma\left(u^\top \tau(M_a \tilde{c}_u + M_a \tilde{c}_v + M_b b_{uv})\right) \qquad (7)$$

Note that the attention is obtained with a sigmoid rather than a softmax non-linearity, since there may be multiple atoms relevant to a particular atom $u$.

Training Both models are trained to minimize the following loss function:

$$\mathcal{L}(T) = -\sum_{R \in T} \sum_{u \neq v \in R} y_{uv} \log(s_{uv}) + (1 - y_{uv}) \log(1 - s_{uv}) \qquad (8)$$

Here we predict each label independently because of the large number of variables. For a given reaction with $N$ atoms, we need to predict the reactivity score of $O(N^2)$ pairs. This quadratic complexity prohibits us from adding higher-order dependencies between different pairs. Nonetheless, we found that independent prediction yields sufficiently good performance.

3.2 Candidate Generation We select the top $K$ atom pairs with the highest predicted reactivity scores and designate them, collectively, as the reaction center. The set of candidate products is then obtained by enumerating all possible bond configuration changes within this set. While the resulting set of candidate products is exponential in $K$, many can be ruled out by invoking additional constraints. For example, each atom has a maximum number of neighbors it can connect to (the valence constraint). We also leverage the statistical bias that reaction centers are very unlikely to consist of disconnected components (the connectivity constraint). Some multi-step reactions do exist that violate the connectivity constraint. As we will show, the set of candidates arising from this procedure is more compact than those arising from templates, without sacrificing coverage.

3.3 Candidate Ranking The training set for candidate ranking consists of lists $T = \{(r, p_0, p_1, \cdots, p_m)\}$, where $r$ are the reactants, $p_0$ is the known product, and $p_1, \cdots, p_m$ are other enumerated candidate products. The goal is to learn a scoring function that ranks the known product $p_0$ highest. The challenge in ranking candidate products is again representational. We must learn to represent $(r, p)$ in a manner that can focus on the key difference between the reactants $r$ and products $p$ while also incorporating the necessary chemical contexts surrounding the changes.

¹Molecules that do not typically contribute atoms to the product but are nevertheless necessary for the reaction to proceed.

We again propose two alternative models to score each candidate pair $(r, p)$. The first model naively represents a reaction by summing difference vectors of all atom representations obtained from a WLN on the associated connected components. Our second and improved model, called WLDN, takes into account higher-order interactions between these difference vectors.

WLN with Sum-Pooling Let $c_v^{(p_i)}$ be the learned atom representation of atom $v$ in candidate product molecule $p_i$.
We define the difference vector $d_v^{(p_i)}$ pertaining to atom $v$ as follows:

$$d_v^{(p_i)} = c_v^{(p_i)} - c_v^{(r)}; \qquad s(p_i) = u^\top \tau\Big(M \sum_{v \in p_i} d_v^{(p_i)}\Big) \qquad (9)$$

Recall that the reactants and products are atom-mapped, so we can use $v$ to refer to the same atom in both. The pooling operation is a simple sum over these difference vectors, resulting in a single vector for each $(r, p_i)$ pair. This vector is then fed into another neural network to score the candidate product $p_i$.

Weisfeiler-Lehman Difference Network (WLDN) Instead of simply summing all difference vectors, the WLDN operates on another graph called a difference graph. A difference graph $D(r, p_i)$ is defined as a molecular graph which has the same atoms and bonds as $p_i$, with atom $v$'s feature vector replaced by $d_v^{(p_i)}$. Operating on the difference graph has several benefits. First, in $D(r, p_i)$, atom $v$'s feature vector deviates from zero only if it is close to the reaction center, thus focusing the processing on the reaction center and its immediate context. Second, $D(r, p_i)$ explicates neighbor dependencies between difference vectors. The WLDN maps this graph-based representation into a fixed-length vector by applying a separately parameterized WLN on top of $D(r, p_i)$:

$$h_v^{(p_i, l)} = \tau\Big(U_1 h_v^{(p_i, l-1)} + U_2 \sum_{u \in N(v)} \tau\big(V[h_u^{(p_i, l-1)}, f_{uv}]\big)\Big) \qquad (1 \le l \le L) \qquad (10)$$

$$d_v^{(p_i, L)} = \sum_{u \in N(v)} W^{(0)} h_u^{(p_i, L)} \odot W^{(1)} f_{uv} \odot W^{(2)} h_v^{(p_i, L)} \qquad (11)$$

where $h_v^{(p_i, 0)} = d_v^{(p_i)}$. The final score of $p_i$ is $s(p_i) = u^\top \tau(M \sum_{v \in p_i} d_v^{(p_i, L)})$.

Training Both models are trained to minimize the softmax log-likelihood objective over the scores $\{s(p_0), s(p_1), \cdots, s(p_m)\}$, where $s(p_0)$ corresponds to the target.

4 Experiments Data As a source of data for our experiments, we used reactions from USPTO granted patents, collected by Lowe [13]. After removing duplicates and erroneous reactions, we obtained a set of 480K reactions, to which we refer in the paper as USPTO. This dataset is divided into 400K, 40K, and 40K reactions for training, development, and testing purposes.² In addition, for comparison purposes we report results on the 15K-reaction subset of this dataset (referred to as USPTO-15K) used by Coley et al. [3]. They selected this subset to include reactions covered by the 1.7K most common templates. We follow their split, with 10.5K, 1.5K, and 3K reactions for training, development, and testing.

Setup for Reaction Center Identification The output of this component consists of the $K$ atom pairs with the highest reactivity scores. We compute the coverage as the proportion of reactions where all atom pairs in the true reaction center are predicted by the model, i.e., where the recorded product is found in the model-generated candidate set. The model features reflect basic chemical properties of atoms and bonds. Atom-level features include the atom's elemental identity, degree of connectivity, number of attached hydrogen atoms, implicit valence, and aromaticity. Bond-level features include bond type (single, double, triple, or aromatic), whether it is conjugated, and whether the bond is part of a ring. Both our local and global models are built upon a Weisfeiler-Lehman Network with unrolled depth 3. All models are optimized with Adam [10], with a learning rate decay factor of 0.9. ²Code and data available at https://github.com/wengong-jin/nips17-rexgen

Setup for Candidate Ranking The goal of this evaluation is to determine whether the model can select the correct product from a set of candidates derived from the reaction center. We first compare model accuracy against the top-performing template-based approach by Coley et al. [3].
This approach employs frequency-based heuristics to construct reaction templates and then uses a neural model to rank the derived candidates. As explained above, due to the scalability issues associated with this baseline, we can only compare on USPTO-15K, which the authors restricted to contain only examples that were instantiated by their most popular templates. For this experiment, we set $K = 8$ for candidate generation, which achieves 90% coverage and yields 250 candidates per reaction. To compare a standard WLN representation against its counterpart with Difference Networks (WLDN), we train them under the same setup on USPTO-15K, fixing the number of parameters to 650K. Next, we evaluate our model on the full USPTO for large-scale evaluation. We set $K = 6$ for candidate generation and report the result of the best model architecture. Finally, to factor apart the coverage of candidate selection and the accuracy of candidate ranking, we consider two evaluation scenarios: (1) the candidate list as derived from the reaction center; (2) the above candidate list augmented with the true product if it is not already found. The latter setup is marked with (*).

4.1 Results Reaction Center Identification Table 1a reports the coverage of the model as compared to the real reaction core. Clearly, the coverage depends on the number of atom pairs $K$, with higher coverage for larger values of $K$. These results demonstrate that even for $K = 8$, the model achieves high coverage, above 90%. The results also clearly demonstrate the advantage of the global model over the local one, which is consistent across all experiments. The superiority of the global model is in line with the well-known fact that reactivity depends on more than the immediate local environment surrounding the reaction center. The presence of certain functional groups (structural motifs that appear frequently in organic chemistry) far from the reaction center can promote or inhibit different modes of reactivity. Moreover, reactivity is often influenced by the presence of reagents, which are separate molecules that may not directly contribute atoms to the product. Consideration of both of these factors necessitates the use of a model that can account for long-range dependencies between atoms. Figure 3 depicts one such example, where the observed reactivity can be attributed to the presence of a reagent molecule that is completely disconnected from the reaction center itself. While the local model fails to anticipate this reactivity, the global one accurately predicts the reaction center. The attention map highlights the reagent molecule as the determinant context.

Candidate Generation Here we compare the coverage of the generated candidates with the template-based model. Table 1a shows that for $K = 6$, our model generates an average of 60.1 candidates and reaches a coverage of 89.8%. The template-based baseline requires 5006 templates extracted from the training data (corresponding to a minimum of five precedent reactions) to achieve 90.1% coverage, with an average of 482 candidates per example. This weakness of the baseline model can be explained by the difficulty of defining general heuristics with which to extract templates from reaction examples. It is possible to define different levels of specificity based on the extent to which atoms surrounding the reaction center are included or generalized [11].
This introduces an unavoidable trade-off between generality (fewer templates, higher coverage, more candidates) and specificity (more templates, less coverage, fewer candidates). Figure 4a illustrates one reaction example where the corresponding template is rare due to the adjacency of the reaction center to both a carbonyl group and a phenyl ring. Because adjacency to either group can influence reactivity, both are included as part of the template, although reactivity in this case does not require the additional specification of the phenyl group. The massive number of templates required for high coverage is a serious impediment for the template approach, because each template application requires solving a subgraph isomorphism problem. Specifically, it takes on average 7 seconds to apply the 5006 templates to a test instance, while our method takes less than 50 ms, about 140 times faster.

Candidate Ranking Table 1b reports the performance on the product prediction task. Since the baseline templates from [3] were optimized on the test set and have 100% coverage, we compare its performance against our models to which the correct product is added (WLN(*) and WLDN(*)). Our model clearly outperforms the baseline by a wide margin. Even when compared against the candidates automatically computed from the reaction center, WLDN outperforms the baseline in top-1 accuracy. The results also demonstrate that the WLDN model consistently outperforms the WLN model. This is consistent with our intuition that modeling higher-order dependencies between the difference vectors is advantageous over simply summing over them. Table 1b also shows that model performance improves when tested on the full USPTO dataset. We further analyze model performance based on the frequency of the underlying transformation, as reflected by the number of template precedents. In Figure 4b we group the test instances according to their frequency and report the coverage of the global model and the mean reciprocal rank (MRR) of the WLDN model on each group. As expected, our approach achieves the highest performance for frequent reactions. However, it maintains reasonable coverage and ranking accuracy even for rare reactions, which are particularly challenging for template-based methods.

4.2 Human Evaluation Study We randomly selected 80 reaction examples from the test set, ten from each of the template popularity intervals of Figure 4b, and asked ten chemists to predict the outcome of each given its reactants. The average accuracy across the ten performers was 48.2%. Our model achieves an accuracy of 69.1%, very close to the best individual performer, who scored 72.0%.

5 Conclusion We proposed a novel template-free approach for chemical reaction prediction. Instead of generating candidate products with reaction templates, we first predict a small set of atoms/bonds comprising the reaction center, and then produce candidate products by enumerating all possible bond configuration changes within this set. Compared to the template-based approach, our framework runs 140 times faster, allowing us to scale to much larger reaction databases. Both our reaction center identifier and our candidate ranking model build on the Weisfeiler-Lehman Network and its variants, which learn compact representations of graphs and reactions. We hope our work will encourage both computer scientists and chemists to explore fully data-driven approaches for this task.

Acknowledgement We thank Tim Jamison, Darsh Shah, Karthik Narasimhan and the reviewers for their helpful comments.
We also thank members of the MIT Department of Chemistry and Department of Chemical Engineering who participated in the human benchmarking study. This work was supported by the DARPA Make-It program under contract ARO W911NF-16-2-0023.
1. What is the main contribution of the paper in the field of chemistry? 2. What are the strengths of the proposed approach, particularly in its application to chemistry? 3. What are the weaknesses of the paper regarding its architecture and methodology? 4. How does the reviewer assess the relevance and completeness of the references cited in the paper? 5. Are there any recent works related to the proposed approach that the reviewer thinks should have been included in the paper?
Review
Review The paper proposes to model molecular reactions using a Weisfeiler-Lehman graph neural network, an architecture that was previously introduced as a neural network counterpart of the Weisfeiler-Lehman graph kernel. The novelty of the paper resides mainly in the careful application of this neural network framework to chemistry, for predicting reaction centers and ranking chemical reactions. The paper is well written, and most of the neural network architectural choices for each problem look sound. In WLN+sumpooling, the sum-pooling of differences reduces to the difference of sums, which loses all spatial information. There thus seems to be an intermediate step of complexity missing between the WLN+sumpooling and the WLDN. Applying at least one nonlinear transformation between the difference and pooling operations could have been considered. The authors cite a number of relevant papers both in machine learning and chemistry. To those, one could also have added the original Weisfeiler-Lehman kernel paper as well as some recent papers that use similar graph neural networks (DTNN, MPNN) to predict molecular properties without the bond structure.
NIPS
Title Predicting Organic Reaction Outcomes with Weisfeiler-Lehman Network Abstract The prediction of organic reaction outcomes is a fundamental problem in computational chemistry. Since a reaction may involve hundreds of atoms, fully exploring the space of possible transformations is intractable. The current solution utilizes reaction templates to limit the space, but it suffers from coverage and efficiency issues. In this paper, we propose a template-free approach to efficiently explore the space of product molecules by first pinpointing the reaction center – the set of nodes and edges where graph edits occur. Since only a small number of atoms contribute to reaction center, we can directly enumerate candidate products. The generated candidates are scored by a Weisfeiler-Lehman Difference Network that models high-order interactions between changes occurring at nodes across the molecule. Our framework outperforms the top-performing template-based approach with a 10% margin, while running orders of magnitude faster. Finally, we demonstrate that the model accuracy rivals the performance of domain experts. 1 Introduction One of the fundamental problems in organic chemistry is the prediction of which products form as a result of a chemical reaction [16, 17]. While the products can be determined unambiguously for simple reactions, it is a major challenge for many complex organic reactions. Indeed, experimentation remains the primary manner in which reaction outcomes are analyzed. This is time consuming, expensive, and requires the help of an experienced chemist. The empirical approach is particularly limiting for the goal of automatically designing efficient reaction sequences that produce specific target molecule(s), a problem known as chemical retrosynthesis [16, 17]. Viewing molecules as labeled graphs over atoms, we propose to formulate the reaction prediction task as a graph transformation problem. A chemical reaction transforms input molecules (reactants) into new molecules (products) by performing a set of graph edits over reactant molecules, adding new edges and/or eliminating existing ones. Given that a typical reaction may involve more than 100 atoms, fully exploring all possible transformations is intractable. The computational challenge is how to reduce the space of possible edits effectively, and how to select the product from among the resulting candidates. The state-of-the-art solution is based on reaction templates (Figure 1). A reaction template specifies a molecular subgraph pattern to which it can be applied and the corresponding graph transformation. Since multiple templates can match a set of reactants, another model is trained to filter candidate products using standard supervised approaches. The key drawbacks of this approach are coverage and scalability. A large number of templates is required to ensure that at least one can reconstitute the correct product. The templates are currently either hand-crafted by experts [7, 1, 15] or generated from reaction databases with heuristic algorithms [2, 11, 3]. For example, Coley et al. [3] extracts 140K unique reaction templates from a database of 1 million reactions. Beyond coverage, applying a 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. template involves graph matching and this makes examining large numbers of templates prohibitively expensive. The current approach is therefore limited to small datasets with limited types of reactions. 
In this paper, we propose a template-free approach by learning to identify the reaction center, a small set of atoms/bonds that change from reactants to products. In our datasets, on average only 5.5% of the reactant molecules directly participate in the reaction. The small size of the reaction centers together with additional constraints on bond formations enables us to directly enumerate candidate products. Our forward-prediction approach is then divided into two key parts: (1) learning to identify reaction centers and (2) learning to rank the resulting enumerated candidate products. Our technical approach builds on neural embedding of the Weisfeiler-Lehman isomorphism test. We incorporate a specific attention mechanism to identify reaction centers while leveraging distal chemical effects not accounted for in related convolutional representations [5, 4]. Moreover, we propose a novel Weisfeiler-Lehman Difference Network to learn to represent and efficiently rank candidate transformations between reactants and products. We evaluate our method on two datasets derived from the USPTO [13], and compare our methods to the current top performing system [3]. Our method achieves 83.9% and 77.9% accuracy on two datasets, outperforming the baseline approach by 10%, while running 140 times faster. Finally, we demonstrate that the model outperforms domain experts by a large margin. 2 Related Work Template-based Approach Existing machine learning models for product prediction are mostly built on reaction templates. These approaches differ in the way templates are specified and in the way the final product is selected from multiple candidates. For instance, Wei et al. [18] learns to select among 16 pre-specified, hand-encoded templates, given fingerprints of reactants and reagents. While this work was developed on a narrow range of chemical reaction types, it is among the first implementations that demonstrates the potential of neural models for analyzing chemical reactions. More recent work has demonstrated the power of neural methods on a broader set of reactions. For instance, Segler and Waller [14] and Coley et al. [3] use a data-driven approach to obtain a large set of templates, and then employ a neural model to rank the candidates. The key difference between these approaches is the representation of the reaction. In Segler and Waller [14], molecules are represented based on their Morgan fingerprints, while Coley et al. [3] represents reactions by the features of atoms and bonds in the reaction center. However, the template-based architecture limits both of these methods in scaling up to larger datasets with more diversity. Template-free Approach Kayala et al. [8] also presented a template-free approach to predict reaction outcomes. Our approach differs from theirs in several ways. First, Kayala et al. operates at the mechanistic level - identifying elementary mechanistic steps rather than the overall transformations from reactants to products. Since most reactions consist of many mechanistic steps, their approach requires multiple predictions to fulfill an entire reaction. Our approach operates at the graph level - predicting transformations from reactants to products in a single step. Second, mechanistic descriptions of reactions are not given in existing reaction databases. Therefore, Kayala et al. created their training set based on a mechanistic-level template-driven expert system. In contrast, our model is learned directly from real-world experimental data. Third, Kayala et al. 
uses feed-forward neural networks where atoms and graphs are represented by molecular fingerprints and additional hand-crafted features. Our approach builds from graph neural networks to encode graph structures. Molecular Graph Neural Networks The question of molecular graph representation is a key issue in reaction modeling. In computational chemistry, molecules are often represented with Morgan Fingerprints, boolean vectors that reflect the presence of various substructures in a given molecule. Duvenaud et al. [5] developed a neural version of Morgan Fingerprints, where each convolution operation aggregates features of neighboring nodes as a replacement of the fixed hashing function. This representation was further expanded by Kearnes et al. [9] into graph convolution models. Dai et al. [4] consider a different architecture where a molecular graph is viewed as a latent variable graphical model. Their recurrent model is derived from Belief Propagation-like algorithms. Gilmer et al. [6] generalized all previous architectures into message-passing network, and applied them to quantum chemistry. The closest to our work is the Weisfeiler-Lehman Kernel Network proposed by Lei et al. [12]. This recurrent model is derived from the Weisfeiler-Lehman kernel that produces isomorphism-invariant representations of molecular graphs. In this paper, we further enhance this representation to capture graph transformations for reaction prediction. 3 Overview Our approach bypasses reaction templates by learning a reaction center identifier. Specifically, we train a neural network that operates on the reactant graph to predict a reactivity score for every pair of atoms (Section 3.1). A reaction center is then selected by picking a small number of atom pairs with the highest reactivity scores. After identifying the reaction center, we generate possible product candidates by enumerating possible bond configurations between atoms in the reaction center (Section 3.2) subject to chemical constraints. We train another neural network to rank these product candidates (represented as graphs, together with the reactants) so that the correct reaction outcome is ranked highest (Section 3.3). The overall pipeline is summarized in Figure 2. Before describing the two modules in detail, we formally define some key concepts used throughout the paper. Chemical Reaction A chemical reaction is a pair of molecular graphs (Gr, Gp), where Gr is called the reactants and Gp the products. A molecular graph is described as G = (V,E), where V = {a1, a2, · · · , an} is the set of atoms and E = {b1, b2, · · · , bm} is the set of associated bonds of varying types (single, double, aromatic, etc.). Note that Gr is has multiple connected components since there are multiple molecules comprising the reactants. The reactions used for training are atom-mapped so that each atom in the product graph has a unique corresponding atom in the reactants. Reaction Center A reaction center is a set of atom pairs {(ai, aj)}, where the bond type between ai and aj differs from Gr to Gp. In other words, a reaction center is a minimal set of graph edits needed to transform reactants to products. Since the reported reactions in the training set are atom-mapped, reaction centers can be identified automatically given the product. 3.1 Reaction Center Identification In a given reaction R = (Gr, Gp), each atom pair (au, av) in Gr is associated with a reactivity label yuv 2 {0, 1} specifying whether their relation differs between reactants and products. 
The label is determined by comparing Gr and Gp with the help of atom-mapping. We predict the label on the basis of learned atom representations that incorporate contextual cues from the surrounding chemical environment. In particular, we build on a Weisfeiler-Lehman Network (WLN) that has shown superior results against other learned graph representations in the narrower setting of predicting chemical properties of individual molecules [12]. 3.1.1 Weisfeiler-Lehman Network (WLN) The WLN is inspired by the Weisfeiler-Lehman isomorphism test for labeled graphs. The architecture is designed to embed the computations inherent in WL isomorphism testing to generate learned isomorphism-invariant representations for atoms. WL Isomorphism Test The key idea of the isomorphism test is to repeatedly augment node labels by the sorted set of node labels of neighbor nodes and to compress these augmented labels into new, short labels. The initial labeling is the atom element. In each iteration, its label is augmented with the element labels of its neighbors. Such a multi-set label is compactly represented as a new label by a hash function. Let c(L)v be the final label of atom av . The molecular graph G = (V,E) is represented as a set {(c(L)u , buv, c(L)v ) | (u, v) 2 E}, where buv is the bond type between u and v. Two graphs are said to be isomorphic if their set representations are the same. The number of distinct labels grows exponentially with the number of iterations L. WL Network The discrete relabeling process does not directly generalize to continuous feature vectors. Instead, we appeal to neural networks to continuously embed the computations inherent in the WL test. Let r be the analogous continuous relabeling function. Then a node v 2 G with neighbor nodes N(v), node features fv , and edge features fuv is “relabeled” according to r(v) = ⌧(U1fv +U2 X u2N(v) ⌧(V[fu, fuv])) (1) where ⌧(·) could be any non-linear function. We apply this relabeling operation iteratively to obtain context-dependent atom vectors h(l)v = ⌧(U1h (l 1) v +U2 X u2N(v) ⌧(V[h(l 1)u , fuv])) (1 l L) (2) where h(0)v = fv and U1,U2,V are shared across layers. The final atom representations arise from mimicking the set comparison function in the WL isomorphism test, yielding cv = X u2N(v) W(0)h(L)u W(1)fuv W(2)h(L)v (3) The set comparison here is realized by matching each rank-1 edge tensor h(L)u ⌦ fuv ⌦ h(L)v to a set of reference edges also cast as rank-1 tensors W(0)[k] ⌦ W(1)[k] ⌦ W(2)[k], where W[k] is the k-th row of matrix W. In other words, Eq. 3 above could be written as cv[k] = X u2N(v) D W(0)[k]⌦W(1)[k]⌦W(2)[k], h(L)u ⌦ fuv ⌦ h(L)v E (4) The resulting cv is a vector representation that captures the local chemical environment of the atom (through relabeling) and involves a comparison against a learned set of reference environments. The representation of the whole graph G is simply the sum over all the atom representations: cG = P v cv . 3.1.2 Finding Reaction Centers with WLN We present two models to predict reactivity: the local and global models. Our local model is based directly on the atom representations cu and cv in predicting label yuv . The global model, on the other hand, selectively incorporates distal chemical effects with the goal of capturing the fact that atoms outside of the reaction center may be necessary for the reaction to occur. For example, the reaction center may be influenced by certain reagents1. We incorporate these distal effects into the global model through an attention mechanism. 
Local Model Let cu, cv be the atom representations for atoms u and v, respectively, as returned by the WLN. We predict the reactivity score of (u, v) by passing these through another neural network: suv = uT ⌧(Macu +Macv +Mbbuv) (5) where (·) is the sigmoid function, and buv is an additional feature vector that encodes auxiliary information about the pair such as whether the two atoms are in different molecules or which type of bond connects them. Global Model Let ↵uv be the attention score of atom v on atom u. The global context representation ˜cu of atom u is calculated as the weighted sum of all reactant atoms where the weight comes from the attention module: ˜cu = X v ↵uvcv; ↵uv = uT ⌧(Pacu +Pacv +Pbbuv) (6) suv = uT ⌧(Ma˜cu +Ma˜cv +Mbbuv) (7) Note that the attention is obtained with sigmoid rather than softmax non-linearity since there may be multiple atoms relevant to a particular atom u. Training Both models are trained to minimize the following loss function: L(T ) = X R2T X u 6=v2R yuv log(suv) + (1 yuv) log(1 suv) (8) Here we predict each label independently because of the large number of variables. For a given reaction with N atoms, we need to predict the reactivity score of O(N2) pairs. This quadratic complexity prohibits us from adding higher-order dependencies between different pairs. Nonetheless, we found independent prediction yields sufficiently good performance. 3.2 Candidate Generation We select the top K atom pairs with the highest predicted reactivity score and designate them, collectively, as the reaction center. The set of candidate products are then obtained by enumerating all possible bond configuration changes within the set. While the resulting set of candidate products is exponential in K, many can be ruled out by invoking additional constraints. For example, every atom has a maximum number of neighbors they can connect to (valence constraint). We also leverage the statistical bias that reaction centers are very unlikely to consist of disconnected components (connectivity constraint). Some multi-step reactions do exist that violate the connectivity constraint. As we will show, the set of candidates arising from this procedure is more compact than those arising from templates without sacrificing coverage. 3.3 Candidate Ranking The training set for candidate ranking consists of lists T = {(r, p0, p1, · · · , pm)}, where r are the reactants, p0 is the known product, and p1, · · · , pm are other enumerated candidate products. The goal is to learn a scoring function that ranks the highest known product p0. The challenge in ranking candidate products is again representational. We must learn to represent (r, p) in a manner that can focus on the key difference between the reactants r and products p while also incorporating the necessary chemical contexts surrounding the changes. 1Molecules that do not typically contribute atoms to the product but are nevertheless necessary for the reaction to proceed. We again propose two alternative models to score each candidate pair (r, p). The first model naively represents a reaction by summing difference vectors of all atom representations obtained from a WLN on the associated connected components. Our second and improved model, called WLDN, takes into account higher order interactions between these differences vectors. WLN with Sum-Pooling Let c(pi)v be the learned atom representation of atom v in candidate product molecule pi. 
3.2 Candidate Generation

We select the top $K$ atom pairs with the highest predicted reactivity scores and designate them, collectively, as the reaction center. The set of candidate products is then obtained by enumerating all possible bond configuration changes within the set. While the resulting set of candidate products is exponential in $K$, many can be ruled out by invoking additional constraints. For example, every atom has a maximum number of neighbors it can connect to (valence constraint). We also leverage the statistical bias that reaction centers are very unlikely to consist of disconnected components (connectivity constraint); some multi-step reactions do exist that violate it. As we will show, the set of candidates arising from this procedure is more compact than those arising from templates, without sacrificing coverage.

3.3 Candidate Ranking

The training set for candidate ranking consists of lists $T = \{(r, p_0, p_1, \dots, p_m)\}$, where $r$ are the reactants, $p_0$ is the known product, and $p_1, \dots, p_m$ are other enumerated candidate products. The goal is to learn a scoring function that ranks the known product $p_0$ highest. The challenge in ranking candidate products is again representational: we must learn to represent $(r, p)$ in a manner that can focus on the key difference between the reactants $r$ and products $p$ while also incorporating the necessary chemical contexts surrounding the changes.

1 Molecules that do not typically contribute atoms to the product but are nevertheless necessary for the reaction to proceed.

We again propose two alternative models to score each candidate pair $(r, p)$. The first model naively represents a reaction by summing the difference vectors of all atom representations obtained from a WLN on the associated connected components. Our second and improved model, called WLDN, takes into account higher-order interactions between these difference vectors.

WLN with Sum-Pooling Let $c_v^{(p_i)}$ be the learned atom representation of atom $v$ in candidate product molecule $p_i$. We define the difference vector $d_v^{(p_i)}$ pertaining to atom $v$ as follows:

$$d_v^{(p_i)} = c_v^{(p_i)} - c_v^{(r)}; \quad s(p_i) = u^T \tau\Big(M \sum_{v \in p_i} d_v^{(p_i)}\Big) \quad (9)$$

Recall that the reactants and products are atom-mapped, so we can use $v$ to refer to the same atom. The pooling operation is a simple sum over these difference vectors, resulting in a single vector for each $(r, p_i)$ pair. This vector is then fed into another neural network to score the candidate product $p_i$.

Weisfeiler-Lehman Difference Network (WLDN) Instead of simply summing all difference vectors, the WLDN operates on another graph called a difference graph. A difference graph $D(r, p_i)$ is defined as a molecular graph which has the same atoms and bonds as $p_i$, with atom $v$'s feature vector replaced by $d_v^{(p_i)}$. Operating on the difference graph has several benefits. First, in $D(r, p_i)$, atom $v$'s feature vector deviates from zero only if it is close to the reaction center, thus focusing the processing on the reaction center and its immediate context. Second, $D(r, p_i)$ explicates neighbor dependencies between difference vectors. The WLDN maps this graph-based representation into a fixed-length vector by applying a separately parameterized WLN on top of $D(r, p_i)$:

$$h_v^{(p_i, l)} = \tau\Big(U_1 h_v^{(p_i, l-1)} + U_2 \sum_{u \in N(v)} \tau\big(V[h_u^{(p_i, l-1)}, f_{uv}]\big)\Big) \quad (1 \le l \le L) \quad (10)$$

$$d_v^{(p_i, L)} = \sum_{u \in N(v)} W^{(0)} h_u^{(p_i, L)} \odot W^{(1)} f_{uv} \odot W^{(2)} h_v^{(p_i, L)} \quad (11)$$

where $h_v^{(p_i, 0)} = d_v^{(p_i)}$. The final score of $p_i$ is $s(p_i) = u^T \tau\big(M \sum_{v \in p_i} d_v^{(p_i, L)}\big)$.

Training Both models are trained to minimize the softmax cross-entropy loss over the scores $\{s(p_0), s(p_1), \dots, s(p_m)\}$, where $s(p_0)$ corresponds to the target.
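Assuming the WLNSketch interface from the first sketch and atom-mapped index alignment between the reactant and candidate runs, the sum-pooling scorer of Eq. (9) and the difference-graph construction reduce to a few lines; all names here are hypothetical:

```python
import torch

def score_candidate_sum_pool(wln, M, u_vec, reactant_inputs, candidate_inputs):
    """Sketch of the sum-pooling scorer (Eq. 9).

    wln is assumed to be a WLNSketch-like module from the earlier sketch;
    atom mapping is assumed, so row v of both runs refers to the same atom.
    """
    c_r = wln(*reactant_inputs)    # (n, hidden) atom vectors for the reactants
    c_p = wln(*candidate_inputs)   # (n, hidden) atom vectors for candidate p_i
    d = c_p - c_r                  # difference vectors d_v
    return u_vec @ torch.relu(M @ d.sum(dim=0))   # scalar score s(p_i)

def difference_graph_inputs(c_r, c_p, candidate_fbonds, candidate_adj):
    # WLDN input (Eqs. 10-11): the candidate's graph with node features
    # replaced by the difference vectors; a second, separately parameterized
    # WLN is then run over this graph.
    return c_p - c_r, candidate_fbonds, candidate_adj
```

Candidates are then ranked via a softmax over their scores $\{s(p_i)\}$, as described in the Training paragraph above.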
4 Experiments

Data As a source of data for our experiments, we used reactions from USPTO granted patents, collected by Lowe [13]. After removing duplicates and erroneous reactions, we obtained a set of 480K reactions, to which we refer in the paper as USPTO. This dataset is divided into 400K, 40K, and 40K reactions for training, development, and testing purposes.2 In addition, for comparison purposes, we report results on the subset of 15K reactions from this dataset (referred to as USPTO-15K) used by Coley et al. [3]. They selected this subset to include reactions covered by the 1.7K most common templates. We follow their split, with 10.5K, 1.5K, and 3K reactions for training, development, and testing.

Setup for Reaction Center Identification The output of this component consists of K atom pairs with the highest reactivity scores. We compute the coverage as the proportion of reactions where all atom pairs in the true reaction center are predicted by the model, i.e., where the recorded product is found in the model-generated candidate set. The model features reflect basic chemical properties of atoms and bonds. Atom-level features include the atom's elemental identity, degree of connectivity, number of attached hydrogen atoms, implicit valence, and aromaticity. Bond-level features include bond type (single, double, triple, or aromatic), whether it is conjugated, and whether the bond is part of a ring. Both our local and global models are built upon a Weisfeiler-Lehman Network with unrolled depth 3. All models are optimized with Adam [10], with learning rate decay factor 0.9.

2 Code and data available at https://github.com/wengong-jin/nips17-rexgen

Setup for Candidate Ranking The goal of this evaluation is to determine whether the model can select the correct product from a set of candidates derived from the reaction center. We first compare model accuracy against the top-performing template-based approach by Coley et al. [3]. This approach employs frequency-based heuristics to construct reaction templates and then uses a neural model to rank the derived candidates. As explained above, due to the scalability issues associated with this baseline, we can only compare on USPTO-15K, which the authors restricted to contain only examples that were instantiated by their most popular templates. For this experiment, we set K = 8 for candidate generation, which achieves 90% coverage and yields 250 candidates per reaction. To compare a standard WLN representation against its counterpart with Difference Networks (WLDN), we train them under the same setup on USPTO-15K, fixing the number of parameters to 650K. Next, we evaluate our model on USPTO for large-scale evaluation. We set K = 6 for candidate generation and report the result of the best model architecture. Finally, to factor apart the coverage of candidate selection and the accuracy of candidate ranking, we consider two evaluation scenarios: (1) the candidate list as derived from the reaction center; (2) the above candidate list augmented with the true product if not found. This latter setup is marked with (*).

4.1 Results

Reaction Center Identification Table 1a reports the coverage of the model as compared to the real reaction core. Clearly, the coverage depends on the number of atom pairs K, with higher coverage for larger values of K. These results demonstrate that even for K = 8, the model achieves high coverage, above 90%. The results also clearly demonstrate the advantage of the global model over the local one, which is consistent across all experiments. The superiority of the global model is in line with the well-known fact that reactivity depends on more than the immediate local environment surrounding the reaction center. The presence of certain functional groups (structural motifs that appear frequently in organic chemistry) far from the reaction center can promote or inhibit different modes of reactivity. Moreover, reactivity is often influenced by the presence of reagents, which are separate molecules that may not directly contribute atoms to the product. Consideration of both of these factors necessitates the use of a model that can account for long-range dependencies between atoms. Figure 3 depicts one such example, where the observed reactivity can be attributed to the presence of a reagent molecule that is completely disconnected from the reaction center itself. While the local model fails to anticipate this reactivity, the global one accurately predicts the reaction center. The attention map highlights the reagent molecule as the determinant context.

Candidate Generation Here we compare the coverage of the generated candidates with the template-based model. Table 1a shows that for K = 6, our model generates an average of 60.1 candidates and reaches a coverage of 89.8%. The template-based baseline requires 5006 templates extracted from the training data (corresponding to a minimum of five precedent reactions) to achieve 90.1% coverage with an average of 482 candidates per example. This weakness of the baseline model can be explained by the difficulty of defining general heuristics with which to extract templates from reaction examples. It is possible to define different levels of specificity based on the extent to which atoms surrounding the reaction center are included or generalized [11].
This introduces an unavoidable trade-off between generality (fewer templates, higher coverage, more candidates) and specificity (more templates, less coverage, fewer candidates). Figure 4a illustrates one reaction example where the corresponding template is rare due to the adjacency of the reaction center to both a carbonyl group and a phenyl ring. Because adjacency to either group can influence reactivity, both are included as part of the template, although reactivity in this case does not require the additional specification of the phenyl group. The massive number of templates required for high coverage is a serious impediment for the template approach, because each template application requires solving a subgraph isomorphism problem. Specifically, it takes on average 7 seconds to apply the 5006 templates to a test instance, while our method takes less than 50 ms, about 140 times faster.

Candidate Ranking Table 1b reports the performance on the product prediction task. Since the baseline templates from [3] were optimized on the test set and have 100% coverage, we compare its performance against our models to which the correct product is added (WLN(*) and WLDN(*)). Our model clearly outperforms the baseline by a wide margin. Even when compared against the candidates automatically computed from the reaction center, WLDN outperforms the baseline in top-1 accuracy. The results also demonstrate that the WLDN model consistently outperforms the WLN model. This is consistent with our intuition that modeling higher-order dependencies between the difference vectors is advantageous over simply summing over them. Table 1b also shows that model performance improves when tested on the full USPTO dataset. We further analyze model performance based on the frequency of the underlying transformation, as reflected by the number of template precedents. In Figure 4b we group the test instances according to their frequency and report the coverage of the global model and the mean reciprocal rank (MRR) of the WLDN model on each of them. As expected, our approach achieves the highest performance for frequent reactions. However, it maintains reasonable coverage and ranking accuracy even for rare reactions, which are particularly challenging for template-based methods.

4.2 Human Evaluation Study

We randomly selected 80 reaction examples from the test set, ten from each of the template popularity intervals of Figure 4b, and asked ten chemists to predict the outcome of each given its reactants. The average accuracy across the ten performers was 48.2%. Our model achieves an accuracy of 69.1%, very close to the best individual performer, who scored 72.0%.

5 Conclusion

We proposed a novel template-free approach for chemical reaction prediction. Instead of generating candidate products via reaction templates, we first predict a small set of atoms/bonds forming the reaction center, and then produce candidate products by enumerating all possible bond configuration changes within the set. Compared to the template-based approach, our framework runs 140 times faster, allowing us to scale to much larger reaction databases. Both our reaction center identifier and candidate ranking model build on the Weisfeiler-Lehman Network and its variants, which learn compact representations of graphs and reactions. We hope our work will encourage both computer scientists and chemists to explore fully data-driven approaches for this task.

Acknowledgement We thank Tim Jamison, Darsh Shah, Karthik Narasimhan, and the reviewers for their helpful comments.
We also thank members of the MIT Department of Chemistry and Department of Chemical Engineering who participated in the human benchmarking study. This work was supported by the DARPA Make-It program under contract ARO W911NF-16-2-0023.
1. What is the main contribution of the paper in predicting organic chemical reactions? 2. How does the proposed approach differ from prior works, specifically Kayala et al.? 3. What are the strengths and weaknesses of the paper regarding its similarity to other works and the level of detail provided? 4. Do you have any questions regarding the neural network training details that were left out of the paper?
Review
Review A deep learning approach is proposed for the application of predicting organic chemical reactions. Given a set of reactants, the algorithm first predicts the likely reaction centers, and then ranks the possible reactions involving those atoms. This paper is well-organized, mostly clear, and makes a significant contribution. Chemical reaction prediction is an important application, and from a machine learning perspective this is a very interesting use of deep learning because of the unique structure of the data --- this paper nicely builds off recent work that uses neural networks to encode graph-structured data. My main comment is that this work is very similar to that of Kayala et al. (presented at NIPS 2011), who propose a very similar two-step process in which neural networks first predict reaction centers and then rank the predicted products. A number of the details are different in this paper, including the prediction of reaction centers as pairs of atoms rather than separate predictions of electron sources and sinks, the encoded representation of the reaction centers, and the use of a difference graph for the ranking where Kayala et al. use a Siamese neural network. It would be interesting if the authors could comment on the advantages or disadvantages of these differences. A number of details about the neural network training have been left out, presumably due to space restrictions, but should be included in the appendix at least. These include the network architectures, activation function, initialization, optimization hyperparameters, etc.

Kayala, Matthew A., Chloé-Agathe Azencott, Jonathan H. Chen, and Pierre Baldi. "Learning to Predict Chemical Reactions." Journal of Chemical Information and Modeling 2011, 51(9), 2209-2222.
NIPS
1. What is the main contribution of the paper in the field of organic chemical reactions? 2. How does the proposed method differ from previous approaches in terms of its template-free nature? 3. Can you explain how the three-step pipeline works, including identifying reaction centers, generating candidate products, and ranking them using Weisfeiler-Lehman networks? 4. How did the proposed method perform compared to existing state-of-the-art methods and human chemist experts? 5. Are there any potential limitations or areas for improvement regarding the proposed method?
Review
Review Summary: This work provides a novel approach to predicting the outcome of organic chemical reactions. A reaction can be computationally regarded as a graph-prediction problem: given several connected graphs (molecules) as input, the model aims to predict a fully-connected graph (the reaction product) that can be obtained by performing several graph edits (the reaction) on some edges and nodes (the reaction center) in the input graphs. Past reaction prediction methods involved exhaustively enumerating reaction centers and fitting them to a large number of existing reaction templates, which is very inefficient and hard to scale. In this work, the authors propose a template-free method to predict the outcome. It is a three-step pipeline: 1) identify the reaction center given the input graphs using a Weisfeiler-Lehman Network; 2) generate candidate products based on their reactivity scores and chemical constraints; 3) rank the candidate products using a Weisfeiler-Lehman Difference Network. The proposed method outperformed an existing state-of-the-art method on a benchmark chemistry dataset in both accuracy (a 10% rise) and efficiency (140 times faster), and also outperformed human chemist experts.

Qualitative Evaluation:

Quality: The work is technically sound. The proposed method is well supported by experiments on both a real-world dataset and human expert comparisons.

Clarity: This work describes its methods clearly. The experiments are introduced in detail.

Originality: This work provides a novel solution for reaction outcome prediction, which does not need prior knowledge of reaction templates. The authors may want to relate some past NIPS work on computational chemistry to their work: Kayala, Matthew A., and Pierre F. Baldi. "A Machine Learning Approach to Predict Chemical Reactions." Advances in Neural Information Processing Systems, 2011.

Significance: The work outperforms the state of the art in reaction product prediction in both accuracy and efficiency. The user study shows that it also outperforms human experts.
NIPS
Title A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models

Abstract Generative models produce realistic objects in many domains, including text, image, video, and audio synthesis. Most popular models—Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs)—usually employ a standard Gaussian distribution as a prior. Previous works show that a richer family of prior distributions may help to avoid the mode collapse problem in GANs and to improve the evidence lower bound in VAEs. We propose a new family of prior distributions—Tensor Ring Induced Prior (TRIP)—that packs an exponential number of Gaussians into a high-dimensional lattice with a relatively small number of parameters. We show that these priors improve the Fréchet Inception Distance for GANs and the Evidence Lower Bound for VAEs. We also study generative models with TRIP in the conditional generation setup with missing conditions. Altogether, we propose a novel plug-and-play framework for generative models that can be utilized in any GAN- and VAE-like architectures.

1 Introduction

Modern generative models are widely applied to the generation of realistic and diverse images, text, and audio files [1–5]. Generative Adversarial Networks (GAN) [6], Variational Autoencoders (VAE) [7], and their variations are the most commonly used neural generative models. Both architectures learn a mapping from some prior distribution $p(z)$—usually a standard Gaussian—to the data distribution $p(x)$. Previous works showed that richer prior distributions might improve generative models: they reduce mode collapse for GANs [8, 9] and give a tighter Evidence Lower Bound (ELBO) for VAEs [10]. If the prior $p(z)$ lies in a parametric family, we can learn the most suitable distribution during training. In this work, we investigate Gaussian Mixture Models as prior distributions, with an exponential number of Gaussians placed in the nodes of a multidimensional lattice. In our experiments, we used a prior with more than a googol ($10^{100}$) Gaussians. To handle such complex distributions, we represent $p(z)$ using a Tensor Ring decomposition [11], a method for approximating high-dimensional tensors with a relatively small number of parameters. We call this family of distributions a Tensor Ring Induced Prior (TRIP). For this distribution, we can compute marginal and conditional probabilities and sample from them efficiently. We also extend TRIP to conditional generation, where a generative model $p(x \mid y)$ produces new objects $x$ with specified attributes $y$. With TRIP, we can produce new objects conditioned on only a subset of attributes, leaving some labels unspecified during both training and inference.

Our main contributions are summarized as follows:

• We introduce a family of distributions that we call a Tensor Ring Induced Prior (TRIP) and use it as a prior for generative models: VAE, GAN, and their variations.

• We investigate an application of TRIP to conditional generation and show that this prior improves quality on sparsely labeled datasets.

• We evaluate TRIP models on the generation of CelebA faces in both conditional and unconditional setups. For GANs, we show an improvement in Fréchet Inception Distance (FID), and for VAEs, an improved ELBO. For conditional generation, we show lower rates of condition violation compared to standard conditional models.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
2 Tensor Ring Induced Prior

In this section, we introduce a Tensor Ring-induced distribution for both discrete and continuous variables. We also define a Tensor Ring Induced Prior (TRIP) family of distributions.

2.1 Tensor Ring decomposition

Tensor Ring decomposition [11] represents large high-dimensional tensors (such as discrete distributions) with a relatively small number of parameters. Consider a joint distribution $p(r_1, r_2, \dots, r_d)$ of $d$ discrete random variables $r_k$ taking values from $\{0, 1, \dots, N_k - 1\}$. We write these probabilities as elements of a $d$-dimensional tensor $P[r_1, r_2, \dots, r_d] = p(r_1, r_2, \dots, r_d)$. For brevity of notation, we use $r_{1:d}$ for $(r_1, \dots, r_d)$. The number of elements in this tensor grows exponentially with the number of dimensions $d$, and for only 50 binary variables the tensor contains $2^{50} \approx 10^{15}$ real numbers. Tensor Ring decomposition reduces the number of parameters by approximating tensor $P$ with low-rank non-negative tensor cores $Q_k \in \mathbb{R}_+^{N_k \times m_k \times m_{k+1}}$, where $m_1, \dots, m_{d+1}$ are core sizes and $m_{d+1} = m_1$:

$$p(r_{1:d}) \propto \hat{P}[r_{1:d}] = \mathrm{Tr}\Big(\prod_{j=1}^{d} Q_j[r_j]\Big) \quad (1)$$

To compute $\hat{P}[r_{1:d}]$, for each random variable $r_k$ we slice the tensor $Q_k$ along the first dimension and obtain a matrix $Q_k[r_k] \in \mathbb{R}_+^{m_k \times m_{k+1}}$. We multiply these matrices for all random variables and compute the trace of the resulting matrix to get a scalar (see Figure 1(b) for an example). In Tensor Ring decomposition, the number of parameters grows linearly with the number of dimensions. With larger core sizes $m_k$, Tensor Ring decomposition can approximate more complex distributions. Note that the order of the variables matters: Tensor Ring decomposition better captures dependencies between closer variables than between distant ones.

With Tensor Ring decomposition, we can compute marginal distributions without computing the whole tensor $\hat{P}[r_{1:d}]$. To marginalize out the random variable $r_k$, we replace the core $Q_k$ in Eq. 1 with the matrix $\tilde{Q}_k = \sum_{r_k=0}^{N_k-1} Q_k[r_k]$:

$$p(r_{1:k-1}, r_{k+1:d}) \propto \hat{P}[r_{1:k-1}, r_{k+1:d}] = \mathrm{Tr}\Big(\prod_{j=1}^{k-1} Q_j[r_j] \cdot \tilde{Q}_k \cdot \prod_{j=k+1}^{d} Q_j[r_j]\Big) \quad (2)$$

In Supplementary Materials, we show an algorithm for computing marginal distributions. We can also compute conditionals as a ratio between joint and marginal probabilities, $p(A \mid B) = p(A, B)/p(B)$; we sample from conditional or marginal distributions using the chain rule.

2.2 Continuous Distributions parameterized with Tensor Ring Decomposition

In this section, we apply the Tensor Ring decomposition to continuous distributions over vectors $z = [z_1, \dots, z_d]$. In our Learnable Prior model, we assume that each component $z_k$ follows a Gaussian Mixture Model with $N_k$ fully factorized components. The joint distribution $p(z)$ is a multidimensional Gaussian Mixture Model with modes placed in the nodes of a multidimensional lattice (Figure 1(a)). The latent discrete variables $s_1, \dots, s_d$ indicate the index of the mixture component for each dimension ($s_k$ corresponds to the $k$-th dimension of the latent code, $z_k$):

$$p(z_{1:d}) = \sum_{s_{1:d}} p(s_{1:d}) \, p(z_{1:d} \mid s_{1:d}) \propto \sum_{s_{1:d}} \hat{P}[s_{1:d}] \prod_{j=1}^{d} \mathcal{N}(z_j \mid \mu_j^{s_j}, \sigma_j^{s_j}) \quad (3)$$

Here, $p(s)$ is a discrete distribution of prior probabilities of the mixture components, which we store as a tensor $\hat{P}[s]$ in a Tensor Ring decomposition. Note that $p(s)$ is not a factorized distribution, and the learnable prior $p(z)$ may learn complex weightings of the mixture components.
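The trace-of-products form of Eqs. (1)-(2) is easy to verify numerically. A minimal NumPy sketch for the discrete case follows; enforcing core non-negativity by taking absolute values is our shortcut, not the paper's parameterization:

```python
import numpy as np

def tr_unnormalized_prob(cores, r):
    # Eq. 1: P_hat[r_1..r_d] = Tr(Q_1[r_1] @ ... @ Q_d[r_d]);
    # cores[k] has shape (N_k, m_k, m_{k+1}), with m_{d+1} = m_1.
    m = cores[0][r[0]]
    for Q, rk in zip(cores[1:], r[1:]):
        m = m @ Q[rk]
    return np.trace(m)

def tr_marginal(cores, assignment):
    # Eq. 2: entries with assignment[k] is None are summed out by
    # replacing Q_k[r_k] with sum_r Q_k[r].
    m = np.eye(cores[0].shape[1])
    for Q, rk in zip(cores, assignment):
        m = m @ (Q.sum(axis=0) if rk is None else Q[rk])
    return np.trace(m)

# Toy usage: 4 binary variables, core size 3.
rng = np.random.default_rng(0)
cores = [np.abs(rng.normal(size=(2, 3, 3))) for _ in range(4)]
Z = tr_marginal(cores, [None] * 4)                 # normalizing constant
p_full = tr_unnormalized_prob(cores, [1, 0, 1, 1]) / Z
p_marg = tr_marginal(cores, [1, None, 1, None]) / Z  # r_2, r_4 summed out
print(p_full, p_marg)
```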
We call the family of distributions parameterized in this form a Tensor Ring Induced Prior (TRIP) and denote its learnable parameters (cores, means, and standard deviations) as $\psi$:

$$\psi = \big\{ Q_1, \dots, Q_d, \mu_1^0, \dots, \mu_d^{N_d - 1}, \sigma_1^0, \dots, \sigma_d^{N_d - 1} \big\} \quad (4)$$

To highlight that the prior distribution is learnable, we further write it as $p_\psi(z)$. As we show later, we can optimize $\psi$ directly using gradient descent for VAE models and REINFORCE [12] for GANs. An important property of the proposed TRIP family is that we can derive its one-dimensional conditional distributions in closed form. For example, to sample using the chain rule, we need the distributions $p_\psi(z_k \mid z_{1:k-1})$:

$$p_\psi(z_k \mid z_{1:k-1}) = \sum_{s_k=0}^{N_k-1} p_\psi(s_k \mid z_{1:k-1}) \, p_\psi(z_k \mid s_k, z_{1:k-1}) = \sum_{s_k=0}^{N_k-1} p_\psi(s_k \mid z_{1:k-1}) \, \mathcal{N}(z_k \mid \mu_k^{s_k}, \sigma_k^{s_k}) \quad (5)$$

From Equation 5 we notice that the one-dimensional conditional distributions are Gaussian Mixture Models with the same means and variances as the prior, but with different weights $p_\psi(s_k \mid z_{1:k-1})$ (see Supplementary Materials). Computations for marginal probabilities in the general case are shown in Algorithm 1; conditional probabilities can be computed as a ratio between joint and marginal probabilities. Note that we compute the normalizing constant on-the-fly.

Algorithm 1 Calculation of marginal probabilities in TRIP
  Input: a set $M$ of variable indices for which we compute the probability, and the values of the latent codes $z_i$ for $i \in M$
  Output: the log marginal probability $\log p(z_M)$, where $z_M = \{z_i \ \forall i \in M\}$
  Initialize $Q_{\text{buff}} = I \in \mathbb{R}^{m_1 \times m_1}$, $Q_{\text{norm}} = I \in \mathbb{R}^{m_1 \times m_1}$
  for $j = 1$ to $d$ do
    if $j$ is marginalized out ($j \notin M$) then
      $Q_{\text{buff}} \leftarrow Q_{\text{buff}} \cdot \big(\sum_{k=0}^{N_j - 1} Q_j[k]\big)$
    else
      $Q_{\text{buff}} \leftarrow Q_{\text{buff}} \cdot \big(\sum_{k=0}^{N_j - 1} Q_j[k] \cdot \mathcal{N}(z_j \mid \mu_j^k, \sigma_j^k)\big)$
    end if
    $Q_{\text{norm}} \leftarrow Q_{\text{norm}} \cdot \big(\sum_{k=0}^{N_j - 1} Q_j[k]\big)$
  end for
  $\log p(z_M) = \log \mathrm{Tr}(Q_{\text{buff}}) - \log \mathrm{Tr}(Q_{\text{norm}})$

3 Generative Models With Tensor Ring Induced Prior

In this section, we describe how popular generative models—Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs)—can benefit from using a Tensor Ring Induced Prior.

3.1 Variational Autoencoder

Variational Autoencoder (VAE) [7, 13] is an autoencoder-based generative model that maps data points $x$ onto a latent space with a probabilistic encoder $q_\phi(z \mid x)$ and reconstructs objects with a probabilistic decoder $p_\theta(x \mid z)$. We used a Gaussian encoder with the reparameterization trick:

$$q_\phi(z \mid x) = \mathcal{N}\big(z \mid \mu_\phi(x), \sigma_\phi(x)\big), \quad z = \epsilon \cdot \sigma_\phi(x) + \mu_\phi(x), \ \epsilon \sim \mathcal{N}(0, I) \quad (6)$$

The most common choice for a prior distribution $p_\psi(z)$ in the latent space is a standard Gaussian distribution $\mathcal{N}(0, I)$. VAEs are trained by maximizing the lower bound of the log marginal likelihood $\log p(x)$, also known as the Evidence Lower Bound (ELBO):

$$\mathcal{L}(\theta, \phi, \psi) = \mathbb{E}_{q_\phi(z \mid x)} \log p_\theta(x \mid z) - KL\big(q_\phi(z \mid x) \,\|\, p_\psi(z)\big) \quad (7)$$

where $KL$ is the Kullback-Leibler divergence. We get an unbiased estimate of $\mathcal{L}(\theta, \phi, \psi)$ by sampling $\epsilon_i \sim \mathcal{N}(0, I)$ and computing a Monte Carlo estimate

$$\mathcal{L}(\theta, \phi, \psi) \approx \frac{1}{l} \sum_{i=1}^{l} \log \frac{p_\theta(x \mid z_i) \, p_\psi(z_i)}{q_\phi(z_i \mid x)}, \quad z_i = \epsilon_i \cdot \sigma_\phi(x) + \mu_\phi(x) \quad (8)$$

When $p_\psi(z)$ is a standard Gaussian, the KL term can be computed analytically, reducing the estimation variance. For VAEs, flexible priors give a tighter evidence lower bound [10, 14] and can help with the problem of the decoder ignoring the latent codes [14, 15]. In this work, we parameterize the learnable prior $p_\psi(z)$ as a Tensor Ring Induced Prior model and train its parameters $\psi$ jointly with the encoder and decoder (Figure 2). We call this model a Variational Autoencoder with Tensor Ring Induced Prior (VAE-TRIP).
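A single-function sketch of the Monte Carlo ELBO estimate of Eq. (8) follows; encoder, decoder, and trip_log_prob are assumed interfaces (the last one standing in for Algorithm 1 with all coordinates observed), not the authors' API:

```python
import torch

def elbo_mc(x, encoder, decoder, trip_log_prob, l=1):
    """Monte Carlo ELBO of Eq. 8 with the reparameterization trick (Eq. 6).

    Assumed interfaces: encoder(x) returns (mu, sigma); decoder(x, z)
    returns per-example log p_theta(x | z); trip_log_prob(z) returns
    log p_psi(z). None of these names come from the paper's code.
    """
    mu, sigma = encoder(x)                        # each of shape (batch, d)
    estimate = 0.0
    for _ in range(l):
        eps = torch.randn_like(mu)
        z = eps * sigma + mu                      # reparameterized sample
        log_q = torch.distributions.Normal(mu, sigma).log_prob(z).sum(-1)
        estimate = estimate + decoder(x, z) + trip_log_prob(z) - log_q
    return (estimate / l).mean()                  # maximize w.r.t. theta, phi, psi
```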
We initialize the means and the variances by fitting 1D Gaussian Mixture Models for each component using samples from the latent codes, and initialize the cores with Gaussian noise. We then re-initialize the means, variances, and cores after the first epoch, and repeat this procedure every 5 epochs.

3.2 Generative Adversarial Networks

Generative Adversarial Networks (GANs) [6] consist of two networks: a generator $G(z)$ and a discriminator $D(x)$. The discriminator is trying to distinguish real objects from objects produced by the generator. The generator, on the other hand, is trying to produce objects that the discriminator considers real. The optimization setup for all models from the GAN family is a min-max problem. For the standard GAN, the learning procedure alternates between optimizing the generator and the discriminator networks with gradient descent/ascent:

$$\min_{G, \psi} \max_{D} \mathcal{L}_{GAN} = \mathbb{E}_{x \sim p(x)} \log D(x) + \mathbb{E}_{z \sim p_\psi(z)} \log\big(1 - D(G(z))\big) \quad (9)$$

Similar to VAE, the prior distribution $p_\psi(z)$ is usually a standard Gaussian, although Gaussian Mixture Models were also previously studied [16]. In this work, we use a TRIP family of distributions to parameterize a multimodal prior of GANs (Figure 3). We expect that having multiple modes in the prior improves the overall quality of generation and helps to avoid anomalies during sampling, such as partially present eyeglasses. During training, we sample multiple latent codes from the prior $p_\psi(z)$ and use REINFORCE [12] to propagate the gradient through the parameters $\psi$. We reduce the variance by using the average discriminator output as a baseline:

$$\nabla_\psi \mathcal{L}_{GAN} \approx \frac{1}{l} \sum_{i=1}^{l} \nabla_\psi \log p_\psi(z_i) \Big( d_i - \frac{1}{l} \sum_{j=1}^{l} d_j \Big) \quad (10)$$

where $d_i = \log\big(1 - D(G(z_i))\big)$ is the discriminator's output and $z_i$ are samples from the prior $p_\psi(z)$. We call this model a Generative Adversarial Network with Tensor Ring Induced Prior (GAN-TRIP). We initialize the means uniformly in the range $[-1, 1]$ and the standard deviations as $1/N_k$.
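Equation (10) can be realized as a surrogate loss whose gradient with respect to $\psi$ matches the REINFORCE estimate. In this sketch, prior_sample and prior_log_prob are assumed TRIP routines (chain-rule sampling and Algorithm 1 respectively), and the samples themselves are treated as non-differentiable constants:

```python
import torch

def trip_generator_step(prior_sample, prior_log_prob, G, D, l=64):
    """Surrogate whose psi-gradient matches the REINFORCE estimate of Eq. 10.

    The batch-mean reward serves as the baseline; d_i is detached so it acts
    as a constant reward, while the first term carries the usual pathwise
    gradient for the generator G.
    """
    z = prior_sample(l)                               # (l, d), no gradient path
    d_i = torch.log(1.0 - D(G(z)))                    # rewards d_i, shape (l,)
    baseline = d_i.mean().detach()
    reinforce = (prior_log_prob(z) * (d_i.detach() - baseline)).mean()
    return d_i.mean() + reinforce                     # minimize w.r.t. G and psi
```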
This restriction also gives:

$$p_\theta(x, y_{\mathrm{ob}} \mid z) = p_\theta(x \mid z, y_{\mathrm{ob}})\, p_\psi(y_{\mathrm{ob}} \mid z) = p_\theta(x \mid z)\, p_\psi(y_{\mathrm{ob}} \mid z). \quad (13)$$

The resulting Evidence Lower Bound is

$$\tilde{\mathcal{L}}(\theta, \phi, \psi) = \mathbb{E}_{q_\phi(z \mid x)} \big[ \log p_\theta(x \mid z) + \log p_\psi(y_{\mathrm{ob}} \mid z) \big] - \mathrm{KL}\big( q_\phi(z \mid x)\, \|\, p_\psi(z) \big). \quad (14)$$

In the proposed model, the autoencoder learns to map objects onto a latent manifold, while the TRIP prior term $\log p_\psi(y_{\mathrm{ob}} \mid z)$ finds areas on the manifold corresponding to objects with the specified attributes. The quality of the model depends on the ordering of the latent codes and the conditions in $p_\psi(z, y)$, since the Tensor Ring poorly captures dependence between variables that are far apart. In our experiments, we found that randomly permuting latent codes and conditions gives good results. We can train the proposed model on partially labeled datasets and use it to draw conditional samples with partially specified constraints. For example, we can ask the model to generate images of men in hats without specifying hair color or the presence of glasses.

5 Related Work

The most common generative models are based on Generative Adversarial Networks [6] or Variational Autoencoders [7]. Both GAN and VAE models usually use continuous unimodal distributions (like a standard Gaussian) as a prior. The space of natural images, however, is multimodal: a person either wears glasses or not—there are no intermediate states. Although generative models are flexible enough to transform unimodal distributions into multimodal ones, they tend to ignore some modes (mode collapse) or produce images with artifacts (half-present glasses).

A few models with learnable prior distributions have been proposed. Tomczak and Welling [10] used a Gaussian mixture model based on encoder proposals as a prior on the latent space of a VAE. Chen et al. [14] and Rezende and Mohamed [17] applied normalizing flows [18–20] to transform a standard normal prior into a more complex latent distribution. [14, 15] applied auto-regressive models to learn a better prior distribution over the latent variables. [21] proposed updating the prior distribution of a trained VAE to avoid samples that have low marginal posterior but high prior probability.

Similar to Tensor Ring decomposition, the Tensor-Train decomposition [22] is used in machine learning and numerical methods to represent tensors with a small number of parameters. Tensor-Train has been applied to the compression of fully connected [23], convolutional [24], and recurrent [25] layers. In our models, we could use a Tensor-Train decomposition instead of a Tensor Ring, but it would require larger cores to achieve comparable results, as the first and last dimensions are farther apart.

Most conditional models handle missing values by imputing them with a predictive model or setting them to a special value; with this approach, we cannot sample objects with partially specified conditions. The VAE-based TELBO model [26] proposes to train a Product-of-Experts model, where the posterior on the latent codes is approximated as $p_\psi(z \mid y_{\mathrm{ob}}) = \prod_{y_i \in y_{\mathrm{ob}}} p_\psi(z \mid y_i)$, requiring a separate posterior model for each condition. The JMVAE model [27] contains three encoders that take both an image and a condition, only a condition, or only an image.

6 Experiments

We conducted experiments on the CelebFaces Attributes Dataset (CelebA) [28] of approximately 400,000 photos with a random train-test split. For conditional generation, we selected 14 binary image attributes, including sex, hair color, and the presence of a mustache or beard. We compared both GAN and VAE models with and without TRIP.
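As a rough illustration of how Eq. 14 could be estimated by Monte Carlo, consider the following sketch. All module interfaces here (`encoder`, `decoder.log_prob`, `prior.log_prob`, `prior.log_prob_labels`) are assumptions for exposition rather than the paper's code, and the KL term is estimated from samples since it has no closed form under a TRIP prior:

```python
import torch

def conditional_elbo(x, y_obs, encoder, decoder, prior, l=8):
    """Monte Carlo estimate of Eq. 14 for one object x with observed labels y_obs."""
    mu, sigma = encoder(x)                                 # q_phi(z|x) = N(mu, sigma)
    eps = torch.randn(l, *mu.shape)
    z = mu + sigma * eps                                   # reparameterized samples
    log_q = torch.distributions.Normal(mu, sigma).log_prob(z).sum(-1)
    recon = decoder.log_prob(x, z)                         # log p_theta(x|z)
    log_py = prior.log_prob_labels(y_obs, z)               # log p_psi(y_ob|z), via Algorithm 1
    kl_est = log_q - prior.log_prob(z)                     # sample-based KL estimate
    return (recon + log_py - kl_est).mean()
```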
We also compared our best model with known approaches on the CIFAR-10 [29] dataset with the standard split. Model architecture and training details are provided in the Supplementary Materials.

6.1 Generating Objects With VAE-TRIP and GAN-TRIP

We evaluate GAN-based models with and without the Tensor Ring learnable prior by measuring the Fréchet Inception Distance (FID). As baseline models, we used Wasserstein GAN (WGAN) [31] and Wasserstein GAN with Gradient Penalty (WGAN-GP) [32] on the CelebA dataset. We also compared learnable priors with fixed, randomly initialized parameters $\psi$. The results in Table 1 (CelebA) and Table 2 (CIFAR-10) suggest that with a TRIP prior the quality improves compared to standard models and models with GMM priors. In some experiments, the GMM-based model performed worse than a standard Gaussian, since the KL term had to be estimated with Monte Carlo sampling, resulting in higher gradient variance.

6.2 Visualization of TRIP

In Figure 4, we visualize the first two dimensions of the learned prior $p_\psi(z_1, z_2)$ in the VAE-TRIP and WGAN-GP-TRIP models. For both models, the prior uses most of the components to produce a complex distribution. Also notice that the components learned different, non-uniform weights.

6.3 Generated Images

Here, we visualize the correspondence between modes and generated images using a procedure that we call mode hopping. We start by randomly sampling a latent code and producing the first image. After that, we randomly select five dimensions and resample them conditioned on the remaining dimensions. We repeat this procedure multiple times and obtain the sequence of sampled images shown in Figure 5. These results show that similar images are localized in the learned prior space, and that changes in a few dimensions alter only a few fine-grained features.

6.4 Generated Conditional Images

In this experiment, we generate images given a subset of attributes to assess the diversity of the generated images. For example, if we specify 'Young man,' we would expect different images to have different hair colors and the presence or absence of glasses or a hat. The generated images shown in Figure 3 indicate that the model learned to produce diverse images with multiple varying attributes.

7 Discussion

We designed our prior using Tensor Ring decomposition due to its higher representation capacity compared to other decompositions. For example, a Tensor Ring with core size $m$ has the same capacity as a Tensor-Train with core size $m^2$ [35]. Although the prior contains an exponential number of modes, in our experiments its learnable parameters accounted for less than 10% of the total weights, which did not cause overfitting. The results can be improved by increasing the core size $m$; however, the computational complexity grows cubically with the core size. We also implemented a conditional GAN but found the REINFORCE-based training of this model very unstable. Further research with variance reduction techniques might improve this approach.

8 Acknowledgements

Image generation for Section 6.3 was supported by the Russian Science Foundation grant no. 17-71-20072.
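The mode-hopping procedure of Section 6.3 is easy to express in code. The sketch below is our paraphrase; `prior.sample_conditional` stands for the closed-form TRIP conditional sampling of Eq. 5 and, like `decode`, is a hypothetical interface:

```python
import numpy as np

def mode_hopping(prior, decode, n_hops=8, k=5, seed=0):
    """Mode-hopping walk: repeatedly resample k random latent dimensions
    conditioned on the rest, decoding an image at each step."""
    rng = np.random.default_rng(seed)
    z = prior.sample()                                     # initial latent code (1D array)
    images = [decode(z)]
    for _ in range(n_hops):
        dims = rng.choice(len(z), size=k, replace=False)   # pick five random dimensions
        z = z.copy()
        z[dims] = prior.sample_conditional(dims, z)        # resample them given the others
        images.append(decode(z))
    return images
```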
1. What is the focus and contribution of the paper on generative models?
2. What are the strengths of the proposed method, particularly in terms of efficiency and novelty?
3. What are the weaknesses of the paper regarding its comparisons with other works and experimental designs?
4. How does the reviewer assess the clarity, quality, originality, significance, and impact of the paper's content?
5. What are some minor comments and questions raised by the reviewer regarding the paper's content and experiments?
Review
I have read the author response and the other reviews and decided to keep my original score of 7.

Summary: The paper proposes a family of priors for GANs and VAEs. These priors are mixtures of Gaussians with a large number of components that can nevertheless be represented with a small number of learnable parameters using tensor ring decomposition. This family of priors enables efficient marginalization and conditioning. The method is applicable to both discrete and continuous latent variables. The method is extended to conditional generative modeling; in particular, missing values in the conditioning variable can be marginalized out. Experiments are conducted on CelebA and Cifar10.

Originality: The proposed method is novel to my knowledge.

Clarity and Quality: The paper is very well written and easy to follow. The experiments are somewhat satisfying. I would have liked to see a comparison to works using richer priors. For example, a comparison to the VampPrior [1] for the VAE experiment would be useful. Furthermore, it is not clear whether TRIP outperforms the GMM baseline solely because it has higher capacity. For example, in the appendix it is mentioned that the number of components used for the GMM is 1000; I was expecting 128*10 components (128-dimensional latents with 10 Gaussians for each dimension). See section 3 of the supplement.

Significance: For the VAE, I would deem this work significant if it were shown that it can also help with latent variable collapse. For the GAN, I would deem this work less significant as it relies on REINFORCE, which is somewhat problematic due to high variance (this is rightfully acknowledged in the paper).

Questions and Minor Comments: (1) What happens when you use this approach to form the variational distribution in the VAE? (2) line 100: it is "log marginal likelihood" not "marginal log-likelihood" (3) For the GAN, did you also use multiple samples from the prior as a GAN baseline? (4) Why not use 1280=128*10 components for the GMM baseline in the GAN model? That would be fairer to the baseline. (5) How do you select the core size m_k?

[1] VAE with a VampPrior. Jakub Tomczak and Max Welling, 2018.
NIPS
Title A Prior of a Googol Gaussians: a Tensor Ring Induced Prior for Generative Models

Abstract Generative models produce realistic objects in many domains, including text, image, video, and audio synthesis. Most popular models—Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs)—usually employ a standard Gaussian distribution as a prior. Previous works show that a richer family of prior distributions may help to avoid the mode collapse problem in GANs and to improve the evidence lower bound in VAEs. We propose a new family of prior distributions—Tensor Ring Induced Prior (TRIP)—that packs an exponential number of Gaussians into a high-dimensional lattice with a relatively small number of parameters. We show that these priors improve the Fréchet Inception Distance for GANs and the Evidence Lower Bound for VAEs. We also study generative models with TRIP in the conditional generation setup with missing conditions. Altogether, we propose a novel plug-and-play framework for generative models that can be utilized in any GAN- or VAE-like architecture.

1 Introduction

Modern generative models are widely applied to the generation of realistic and diverse images, text, and audio files [1–5]. Generative Adversarial Networks (GAN) [6], Variational Autoencoders (VAE) [7], and their variations are the most commonly used neural generative models. Both architectures learn a mapping from some prior distribution $p(z)$—usually a standard Gaussian—to the data distribution $p(x)$. Previous works showed that richer prior distributions may improve generative models—reducing mode collapse for GANs [8, 9] and yielding a tighter Evidence Lower Bound (ELBO) for VAEs [10]. If the prior $p(z)$ lies in a parametric family, we can learn the most suitable distribution for it during training.

In this work, we investigate Gaussian Mixture Models as prior distributions with an exponential number of Gaussians in the nodes of a multidimensional lattice. In our experiments, we used a prior with more than a googol ($10^{100}$) Gaussians. To handle such complex distributions, we represent $p(z)$ using a Tensor Ring decomposition [11]—a method for approximating high-dimensional tensors with a relatively small number of parameters. We call this family of distributions a Tensor Ring Induced Prior (TRIP). For this distribution, we can compute marginal and conditional probabilities and sample from them efficiently. We also extend TRIP to conditional generation, where a generative model $p(x \mid y)$ produces new objects $x$ with specified attributes $y$. With TRIP, we can produce new objects conditioned on only a subset of attributes, leaving some labels unspecified during both training and inference.

∗equal contribution
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Our main contributions are summarized as follows:
• We introduce a family of distributions that we call a Tensor Ring Induced Prior (TRIP) and use it as a prior for generative models—VAEs, GANs, and their variations.
• We investigate an application of TRIP to conditional generation and show that this prior improves quality on sparsely labeled datasets.
• We evaluate TRIP models on the generation of CelebA faces in both conditional and unconditional setups. For GANs, we show an improvement in Fréchet Inception Distance (FID), and for VAEs an improved ELBO. For conditional generation, we show lower rates of condition violation compared to standard conditional models.
2 Tensor Ring Induced Prior

In this section, we introduce a Tensor Ring-induced distribution for both discrete and continuous variables. We also define the Tensor Ring Induced Prior (TRIP) family of distributions.

2.1 Tensor Ring decomposition

Tensor Ring decomposition [11] represents large high-dimensional tensors (such as discrete distributions) with a relatively small number of parameters. Consider a joint distribution $p(r_1, r_2, \dots, r_d)$ of $d$ discrete random variables $r_k$ taking values from $\{0, 1, \dots, N_k - 1\}$. We write these probabilities as elements of a $d$-dimensional tensor $P[r_1, r_2, \dots, r_d] = p(r_1, r_2, \dots, r_d)$. For brevity of notation, we write $r_{1:d}$ for $(r_1, \dots, r_d)$. The number of elements in this tensor grows exponentially with the number of dimensions $d$: for only 50 binary variables, the tensor contains $2^{50} \approx 10^{15}$ real numbers. Tensor Ring decomposition reduces the number of parameters by approximating the tensor $P$ with low-rank non-negative tensor cores $Q_k \in \mathbb{R}_{+}^{N_k \times m_k \times m_{k+1}}$, where $m_1, \dots, m_{d+1}$ are core sizes and $m_{d+1} = m_1$:

$$p(r_{1:d}) \propto \hat{P}[r_{1:d}] = \operatorname{Tr}\left( \prod_{j=1}^{d} Q_j[r_j] \right) \quad (1)$$

To compute $\hat{P}[r_{1:d}]$, for each random variable $r_k$ we slice the tensor $Q_k$ along its first dimension and obtain a matrix $Q_k[r_k] \in \mathbb{R}_{+}^{m_k \times m_{k+1}}$. We multiply these matrices for all random variables and compute the trace of the resulting matrix to get a scalar (see Figure 1(b) for an example). In Tensor Ring decomposition, the number of parameters grows linearly with the number of dimensions. With larger core sizes $m_k$, Tensor Ring decomposition can approximate more complex distributions. Note that the order of the variables matters: Tensor Ring decomposition captures dependencies between nearby variables better than between distant ones.

With Tensor Ring decomposition, we can compute marginal distributions without computing the whole tensor $\hat{P}[r_{1:d}]$. To marginalize out the random variable $r_k$, we replace the core $Q_k$ in Eq 1 with the matrix $\tilde{Q}_k = \sum_{r_k=0}^{N_k-1} Q_k[r_k]$:

$$p(r_{1:k-1}, r_{k+1:d}) \propto \hat{P}[r_{1:k-1}, r_{k+1:d}] = \operatorname{Tr}\left( \prod_{j=1}^{k-1} Q_j[r_j] \cdot \tilde{Q}_k \cdot \prod_{j=k+1}^{d} Q_j[r_j] \right) \quad (2)$$

In the Supplementary Materials, we give an algorithm for computing marginal distributions. We can also compute conditionals as a ratio between joint and marginal probabilities, $p(A \mid B) = p(A, B)/p(B)$, and we sample from conditional or marginal distributions using the chain rule.

2.2 Continuous Distributions parameterized with Tensor Ring Decomposition

In this section, we apply Tensor Ring decomposition to continuous distributions over vectors $z = [z_1, \dots, z_d]$. In our learnable prior model, we assume that each component $z_k$ follows a Gaussian Mixture Model with $N_k$ fully factorized components. The joint distribution $p(z)$ is then a multidimensional Gaussian Mixture Model with modes placed in the nodes of a multidimensional lattice (Figure 1(a)). The latent discrete variables $s_1, \dots, s_d$ indicate the index of the mixture component for each dimension ($s_k$ corresponds to the $k$-th dimension $z_k$ of the latent code):

$$p(z_{1:d}) = \sum_{s_{1:d}} p(s_{1:d})\, p(z_{1:d} \mid s_{1:d}) \propto \sum_{s_{1:d}} \hat{P}[s_{1:d}] \prod_{j=1}^{d} \mathcal{N}(z_j \mid \mu_j^{s_j}, \sigma_j^{s_j}) \quad (3)$$

Here, $p(s)$ is a discrete distribution of prior probabilities of the mixture components, which we store as a tensor $\hat{P}[s]$ in a Tensor Ring decomposition. Note that $p(s)$ is not a factorized distribution, and the learnable prior $p(z)$ may learn complex weightings of the mixture components.
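As an illustration, Eq. 1 and Eq. 2 amount to a chain of small matrix products. The following NumPy sketch is ours (not the authors' code) and assumes equal square core sizes for simplicity:

```python
import numpy as np

def tr_unnormalized_prob(cores, r):
    """Unnormalized probability P_hat[r_1:d] = Tr(prod_j Q_j[r_j]) from Eq. 1.

    cores: list of d non-negative arrays; cores[j] has shape (N_j, m, m),
           with the last core wrapping around so the trace is well defined.
    r:     sequence of d indices (r_1, ..., r_d).
    """
    acc = np.eye(cores[0].shape[1])          # running matrix product
    for Q, r_j in zip(cores, r):
        acc = acc @ Q[r_j]                   # slice core j along its first dimension
    return np.trace(acc)

def tr_marginal(cores, fixed):
    """Unnormalized marginal from Eq. 2: dimensions absent from `fixed`
    (a dict {j: r_j}) are summed out by replacing Q_j with sum_{r_j} Q_j[r_j]."""
    acc = np.eye(cores[0].shape[1])
    for j, Q in enumerate(cores):
        acc = acc @ (Q[fixed[j]] if j in fixed else Q.sum(axis=0))
    return np.trace(acc)

# Toy usage: 4 binary variables, core size 3.
rng = np.random.default_rng(0)
cores = [np.abs(rng.standard_normal((2, 3, 3))) for _ in range(4)]
p01 = tr_marginal(cores, {0: 0, 1: 1})       # proportional to p(r_1 = 0, r_2 = 1)
```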
1. What is the focus of the paper regarding parametric distribution?
2. What are the strengths of the proposed method, particularly its effectiveness as a learnable prior?
3. What are the weaknesses of the paper, such as unfair comparisons in experiments?
4. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
Review
Thank you to the authors for performing these experiments and addressing the concerns raised by the reviewers. I am pleased to see the performance of TRIP in the context of flows as well. I recommend that this paper be accepted.

==

The authors present TRIP (Tensor Ring Induced Prior), a parametric family of distributions. These distributions are parameterized as a tensor ring decomposition (Zhao et al. 2016) by d "cores," which define a distribution over d discrete variables. A continuous distribution over R^n can be obtained by placing one Gaussian distribution for each value of the discrete variables, which corresponds to a mixture of a very large number of Gaussians (10^100 Gaussians in this paper). The authors then demonstrate the effectiveness of this parameterization as a learnable prior for VAEs and GANs. The authors justify this approach because the inherent multimodality of this parameterization may better suit the multimodal nature of natural images. The authors cite half-present glasses in the case of GANs trained on CelebA as a disadvantage of unimodal priors.

Originality: This work builds on a wide body of work on learned priors. The approach seems novel as far as I'm aware, although I'm not familiar with the related work on tensor decompositions.

Quality: The authors carefully motivate, define, and experimentally test this approach in a wide variety of settings. One concern I have about the experimental setup is that the authors compare TRIP to a N(0, I) prior and a GMM prior. However, these seem like unfair comparisons because TRIP has many more parameters than N(0, I) and the GMM. It may be fairer to compare to a decoder with the same number of parameters as a TRIP-based decoder would have. I would also like to have gotten a better sense of how much slower a TRIP prior is to train compared to the standard approaches.

Clarity: I found this paper to be well written and easy to follow. The logic flows well from section to section. I very much appreciated the visualizations, especially Figures 1, 4, and 5.

Significance: TRIP seems like a practical algorithm that can be used as a prior for VAEs and GANs, or more generally whenever a mixture of a large number of Gaussians is desired.
NIPS
1. What is the focus and contribution of the paper on deep generative models?
2. What are the strengths of the proposed approach, particularly in terms of its elegance and tractability?
3. What are the weaknesses of the paper, especially regarding its experimental section and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions regarding the computational cost and training time of the proposed method?
Review
# Overall

This paper introduces a complex prior (TRIP) for deep generative models. TRIP has tractable marginal and conditional distributions and can represent a mixture of an exponential number of Gaussians with a small number of parameters. Overall, the paper is well written, the proposed technique is elegant, and the motivation is clear. The main weakness is the experimental section.

# Weaknesses

- Some important related works are discussed in Sec. 5 but not compared against directly in the experiments. What is gained by TRIP vs. autoregressive priors [12, 13] or flow-based priors [15]? There are no quantitative comparisons between training the generative models with TRIP and with other advanced parameterized priors.
- What is the computational cost of TRIP? Since TRIP introduces additional parameters for the prior and brings extra computation, it is worth knowing how much it slows down training.
NIPS
Title Mixture Matrix Completion

Abstract Completing a data matrix X has become a ubiquitous problem in modern data science, with motivations in recommender systems, computer vision, and network inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several low-rank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data.

1 Introduction

Matrix completion aims to estimate the missing entries of an incomplete data matrix X. One of its main motivations arises in recommender systems, where each row represents an item, and each column represents a user. We only observe an entry in X whenever a user rates an item, and the goal is to predict unseen ratings in order to make good recommendations.

Related Work. In 2009, Candès and Recht [1] introduced low-rank matrix completion (LRMC), arguably the most popular model for this task. LRMC assumes that each column (user) can be represented as a linear combination of a few others, whence X is low-rank. Later, in 2012, Eriksson et al. [2] introduced high-rank matrix completion (HRMC), also known as subspace clustering with missing data. This more general model assumes that each column of X comes from one of several low-rank matrices, thus allowing several types of users. Since their inceptions, both LRMC and HRMC have attracted a tremendous amount of attention (see [1–27] for a very incomplete list).

Paper contributions. This paper introduces an even more general model: mixture matrix completion (MMC), which assumes that each entry in X (rather than each column) comes from one of several low-rank matrices, and the goal is to recover the matrices in the mixture. Figure 1 illustrates the generalization from LRMC to HRMC and to MMC.
One of the main motivations behind MMC is that users often share the same account, and so each column in X may contain ratings from several users. Nonetheless, as we show in Section 2, MMC is also a more accurate model for many other contemporary applications, including network inference, computer vision, and metagenomics. This paper makes several fundamental contributions about MMC:

– Well-posedness. First, we show that MMC is theoretically possible if we observe the right entries and the mixture is generic (precise definitions below).

– Identifiability conditions. We provide precise information-theoretic conditions on the entries that need to be observed such that a mixture of K low-rank matrices is identifiable. These extend similar recent results for LRMC [3] and HRMC [4] to the setting of MMC. The subtlety in proving these results is that there could exist false mixtures that agree with the observed entries, even if the sampling is uniquely completable for LRMC and HRMC (see Example 1). In other words, there exist samplings that are identifiable for LRMC (and HRMC) but are not identifiable for MMC, and so in general it is not enough to simply have K times more samples. Hence, it was necessary to derive identifiability conditions for MMC, similar to those of LRMC in [3] and HRMC in [4]. We point out that in contrast to typical completion theory [1, 2, 5–20], these types of identifiability conditions are deterministic (not restricted to uniform sampling) and make no coherence assumptions.

– Sample complexity. If X ∈ R^{d×n} is a mixture of K rank-r matrices, we show that with high probability our identifiability conditions will be met if each entry is observed with probability O((K/d) max{r, log d}), thus deriving the sample complexity of MMC. This matches the sample complexity of HRMC [4], and simplifies to O((1/d) max{r, log d}) in the case K = 1, which corresponds to the sample complexity of LRMC [3]. Intuitively, this means that information-theoretically we virtually pay no price for mixing low-rank matrices.

– Practical algorithm. Our identifiability results follow from a combinatorial analysis that is infeasible in practice. To address this, we give a practical alternating algorithm for MMC whose performance (on the more difficult problem of MMC) is comparable to state-of-the-art algorithms for the much simpler problems of HRMC and LRMC.

2 Motivating Applications

Besides recommender systems, there are many important applications where data can be modeled as a mixture of low-rank matrices. Here are a few examples motivated by current data science challenges.

Network Inference. Estimating the topology of a network (internet, sensor networks, biological networks, social networks) has been the subject of a large body of research in recent years [28–34]. To this end, companies routinely collect distances between nodes (e.g., computers) that connect with monitors (e.g., Google, Amazon, Facebook) in a data matrix X. In a simplified model, if node j is in subnet k, then the jth column can be modeled as the sum of (i) the distance between node j and router k, and (ii) the distance between router k and each of the monitors. Hence, the columns (nodes) corresponding to each subnet form a low-rank matrix, which is precisely the model assumed by HRMC. However, depending on the network's traffic, each node may use different routes to communicate at different times.
Consequently, the same column in X may contain measurements from different low-rank matrices. In other words, distance matrices of networks are a mixture of low-rank matrices.

Computer Vision. Background segmentation is one of the most fundamental and crucial tasks in computer vision, yet it can be tremendously challenging. The vectorized frames of a video can be modeled as columns with some entries (pixels) in a low-rank background, and some outlier entries corresponding to the foreground. Typical methods, like the acclaimed Robust PCA (principal component analysis) [35–46], assume that the foreground is sparse and has no particular structure. However, in many situations this is not the case. For instance, since the location of an object in consecutive frames is highly correlated, the foreground can be highly structured. Similarly, the foreground may not be sparse, especially if there are foreground objects moving close to the camera (e.g., in a selfie). Even state-of-the-art methods fail in scenarios like these, which are not covered by current models (see Figure 3 for an example). In contrast, MMC allows one matrix in the mixture to represent the background, other matrices to represent foreground objects (small or large, even dominant), and yet other matrices to account for occlusions and other illumination/visual artifacts. Hence, MMC can be a more accurate model for video segmentation and other image processing tasks, including inpainting [47] and face clustering, which we explore in our experiments.

Metagenomics. One contemporary challenge in biology is to quantify the presence of different types of bacteria in a system (e.g., the human gut microbiome) [48–52]. The main idea is to collect several DNA samples from such a system, and use their genomic information to count the number of bacteria of each type (the genome of each bacterium determines its type). In practice, to obtain an organism's genome (e.g., a person's genome), biologists feed a DNA sample (e.g., blood or hair) to a sequencer machine that produces a series of reads, which are short genomic sequences that can later be assembled and aligned to recover the entire genome. The challenge arises when the sequencer is given a sample with DNA from multiple organisms, as is the case in the human gut microbiome, where any sample will contain a mixture of DNA from multiple bacteria that cannot be disentangled into individual organisms. In this case, each read produced by the sequencer may correspond to a different type of bacteria. Consequently, each DNA sample (column) may contain genes (rows) from different types of bacteria, which is precisely the model that MMC describes.

3 Problem Statement

Let X^1, . . . , X^K ∈ R^{d×n} be a set of rank-r matrices, and let Ω^1, . . . , Ω^K ∈ {0, 1}^{d×n} indicate disjoint sets of observed entries. Suppose X^1, . . . , X^K and Ω^1, . . . , Ω^K are unknown, and we only observe X_Ω, defined as follows:

– If the (i, j)th entry of Ω^k is 1, then the (i, j)th entry of X_Ω is equal to the (i, j)th entry of X^k.
– If the (i, j)th entry of Ω^k is 0 for every k = 1, . . . , K, then the (i, j)th entry of X_Ω is missing.

This way Ω^k indicates the entries of X_Ω that correspond to X^k, and Ω := Σ_{k=1}^K Ω^k indicates the set of all observed entries. Since Ω^1, . . . , Ω^K are disjoint, Ω ∈ {0, 1}^{d×n}. Equivalently, each observed entry of X_Ω corresponds to an entry in either X^1 or X^2 or . . . or X^K (i.e., there are no collisions). In words, X_Ω contains a mixture of entries from several low-rank matrices.
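To make the observation model concrete, here is a minimal NumPy sketch (our own illustration, not the paper's code; the function name, the defaults, and the use of NaN to encode missing entries are all our choices) that generates K generic rank-r matrices, disjoint masks Ω^k, and the mixed observation X_Ω:

```python
import numpy as np

def make_mmc_instance(d=100, n=100, r=5, K=2, p=0.6, seed=0):
    """Toy MMC instance: each entry of X_Omega is missing with probability
    (1 - p), and otherwise equals the corresponding entry of X^k, with each
    k chosen with probability p/K (so the Omega^k are disjoint)."""
    rng = np.random.default_rng(seed)
    # Generic rank-r matrices X^k = U^k Theta^k with i.i.d. N(0,1) factors.
    Xs = [rng.standard_normal((d, r)) @ rng.standard_normal((r, n))
          for _ in range(K)]
    labels = rng.choice(K + 1, size=(d, n), p=[1 - p] + [p / K] * K)
    X_obs = np.full((d, n), np.nan)          # NaN marks missing entries
    Omegas = []
    for k in range(K):
        mask = labels == k + 1               # Omega^k
        X_obs[mask] = Xs[k][mask]
        Omegas.append(mask)
    return Xs, Omegas, X_obs
```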
The goal of MMC is to recover all the columns of X^1, . . . , X^K that have observations in X_Ω (see Figure 1 to build some intuition). In our recommender systems example, a column x_ω ∈ X_Ω will contain entries from X^k whenever x_ω contains ratings from a user of the kth type. Similarly, the same column will contain entries from X^ℓ whenever it also contains ratings from a user of the ℓth type. We would like to predict the preferences of both users, or more generally, of all users that have ratings in x_ω. On the other hand, if x_ω has no entries from X^k, then x_ω involves no users of the kth type, and so it would be impossible (and futile) to try to recover such a column of X^k. In MMC, the matrices Ω^1, . . . , Ω^K play the role of the hidden variables constantly present in mixture problems. Notice that if we knew Ω^1, . . . , Ω^K, then we could partition X_Ω accordingly and estimate X^1, . . . , X^K using standard LRMC. The challenge is that we do not know Ω^1, . . . , Ω^K.

3.1 The Subtleties of MMC

The main theoretical difficulty of MMC is that depending on the pattern of missing data, there could exist false mixtures: matrices X̃^1, . . . , X̃^K, other than X^1, . . . , X^K, that agree with X_Ω, even if X^1, . . . , X^K are observed on uniquely completable patterns for LRMC.

Example 1. Consider the next rank-1 matrices X^1, X^2, and their partially observed mixture X_Ω (a · marks a missing entry):

X^1 =
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4
1 2 3 4

X^2 =
1  2  3  4
2  4  6  8
3  6  9 12
4  8 12 16
5 10 15 20

X_Ω =
1  ·  3  4
1  2  ·  8
3  2  3  ·
4  8  3  4
· 10 15  4

We can verify that X^1 and X^2 are observed on uniquely completable sampling patterns for LRMC [3]. Nonetheless, we can construct the following false rank-1 matrices that agree with X_Ω:

X̃^1 =
60  40   15    4
 1  2/3  1/4  1/15
 3   2   3/4  1/5
12   8    3   4/5
60  40   15    4

X̃^2 =
 1  1/4   3   1
 8   2   24   8
 1  1/4   3   1
 4   1   12   4
40  10  120  40

This shows that even with unlimited computational power, if we exhaustively search all the identifiable patterns for LRMC, we can end up with false mixtures. Hence the importance of studying the identifiable patterns for MMC. False mixtures arise because we do not know a priori which entries of X_Ω correspond to each X^k. Hence, it is possible that a rank-r matrix X̃ agrees with some entries from X^1, other entries from X^2, and so on. Furthermore, X̃ may even be the only rank-r matrix that agrees with such a combination of entries, as in Example 1.

Remark 1. Recall that LRMC and HRMC are tantamount to identifying the subspace(s) containing the columns of X [3, 4]. In fact, if we knew such subspaces, LRMC and HRMC would become almost trivial problems (see Appendix A for details). Similarly, if no data is missing, HRMC simplifies to subspace clustering, which has been studied extensively and is now reasonably well understood [53–62]. In contrast, MMC remains challenging even if the subspaces corresponding to the low-rank matrices in the mixture are known, and even if X is fully observed. We refer the curious reader to Appendix A, and point out the bottom row and the last column in Figure 2, which show the MMC error when the underlying subspaces are known and when X is fully observed.
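Example 1 is easy to check numerically. The sketch below (our own verification code, not from the paper) confirms that both false matrices are rank-1 and that every observed entry of X_Ω agrees with at least one of them:

```python
import numpy as np

X_obs = np.array([[1, np.nan, 3, 4],
                  [1, 2, np.nan, 8],
                  [3, 2, 3, np.nan],
                  [4, 8, 3, 4],
                  [np.nan, 10, 15, 4]], dtype=float)

# The false rank-1 matrices of Example 1, written as outer products.
X1_false = np.outer([1, 1/60, 1/20, 1/5, 1], [60, 40, 15, 4])
X2_false = np.outer([1, 8, 1, 4, 40], [1, 1/4, 3, 1])

observed = ~np.isnan(X_obs)
agrees = np.isclose(X_obs, X1_false) | np.isclose(X_obs, X2_false)
assert agrees[observed].all()                   # every observed entry is explained
assert np.linalg.matrix_rank(X1_false) == 1
assert np.linalg.matrix_rank(X2_false) == 1
print("false mixture agrees with all observed entries")
```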
4 Main Theoretical Results

Example 1 shows the importance of studying the identifiable patterns for MMC, which we do now. First recall that r + 1 samples per column are necessary for LRMC [3]. This implies that even if an oracle told us Ω^1, . . . , Ω^K, if we intend to recover a column of X^k, we need to observe it on at least r + 1 entries. Hence we assume without loss of generality that:

(A1) Each column of Ω^k has either 0 or r + 1 non-zero entries.

In words, A1 requires that each column of X^k to be recovered is observed on exactly r + 1 entries. Of course, observing more entries may only aid completion. Hence, rather than an assumption, A1 describes the most difficult scenario, where we have the bare minimum amount of information required for completion. We use A1 to ease notation, exposition and analysis. All our results can be easily extended to the case where A1 is dropped (see Remark 2). Without further assumptions on X, completion (of any kind) may be impossible. To see this, consider the simple example where X is only supported on the ith row. Then it would be impossible to recover X unless all columns were observed on the ith row. In most completion applications this would be unlikely. For example, in a movie recommender system like Netflix, this would require that all the users watched (and rated) the same movie. To rule out scenarios like these, typical completion theory requires incoherence and uniform sampling. Incoherence guarantees that the information is well spread over the matrix. Uniform sampling guarantees that all rows and columns are sufficiently sampled. However, it is usually unclear (and generally unverifiable) whether an incomplete matrix is coherent. Furthermore, observations are hardly ever uniformly distributed. For instance, we do not expect children to watch adult movies. To avoid these issues, instead of incoherence we will assume that X is a generic mixture of low-rank matrices. More precisely, we assume that:

(A2) X^1, . . . , X^K are drawn independently according to an absolutely continuous distribution with respect to the Lebesgue measure on the determinantal variety (the set of all d × n rank-r matrices).

A2 essentially requires that each X^k is a generic rank-r matrix. This type of genericity assumption is becoming increasingly common in studies of LRMC, HRMC, and related problems [3, 4, 23–27, 46]. See Appendix C for a further discussion of A2 and its relation to other common assumptions from the literature. With this, we are ready to present our main theorem. It gives a deterministic condition on Ω that guarantees that X^1, . . . , X^K can be identified from X_Ω. This provides information-theoretic requirements for MMC. The proof is in Appendix B.

Theorem 1. Let A1-A2 hold. Suppose there exist matrices {Ω_τ}_{τ=1}^{r+1}, each formed with a disjoint subset of (d − r + 1) columns of Ω^k, such that for every τ:

(†) Every matrix Ω′ formed with a proper subset of the columns in Ω_τ has at least r fewer columns than non-zero rows.

Then all the columns of X^k that have observations in X_Ω are identifiable.

In words, Theorem 1 states that MMC is possible as long as we observe the right entries in each X^k. The intuition is that each of these entries imposes a constraint on what X^1, . . . , X^K may be, and the pattern in Ω determines whether these constraints are redundant. Patterns satisfying the conditions of Theorem 1 guarantee that X^1, . . . , X^K is the only mixture that satisfies the constraints produced by the observed entries.

Remark 2. Recall that r + 1 samples per column are strictly necessary for completion. A1 requires that we have exactly that minimum number of samples. If X^k is observed on more than r + 1 entries per column, it suffices that Ω^k contains a pattern satisfying the conditions of Theorem 1.

Theorem 1 shows that MMC is possible if the samplings satisfy certain combinatorial conditions; on small patterns, condition (†) can be checked directly, as the sketch below illustrates.
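The following brute-force sketch (our own illustration; the function name and interface are hypothetical) enumerates all proper column subsets to test (†). Its exponential cost is exactly why the combinatorial analysis is infeasible at scale:

```python
from itertools import combinations
import numpy as np

def satisfies_dagger(Omega_tau, r):
    """Check condition (†): every proper subset of the columns of Omega_tau
    must have at least r fewer columns than non-zero rows, i.e. a subset of
    s columns must touch at least s + r distinct rows."""
    d, m = Omega_tau.shape
    for s in range(1, m):                      # proper subsets only
        for cols in combinations(range(m), s):
            sub = Omega_tau[:, list(cols)]
            nonzero_rows = int(sub.any(axis=1).sum())
            if nonzero_rows < s + r:
                return False
    return True
```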
Our next result shows that if each entry of X^k is observed in X_Ω with probability O((1/d) max{r, log d}), then with high probability Ω^k will satisfy such conditions. The proof is in Appendix B.

Theorem 2. Suppose r ≤ d/6 and n ≥ (r + 1)(d − r + 1). Let ε > 0 be given. Suppose that an entry of X_Ω is equal to the corresponding entry of X^k with probability

p ≥ (2/d) max{2r, 12(log(d/ε) + 1)}.

Then Ω^k satisfies the sampling conditions of Theorem 1 with probability ≥ 1 − 2(r + 1)ε.

Theorem 2 shows that the sample complexity of MMC is O(K max{r, log d}) observations per column of X_Ω. This is exactly the same as the sample complexity of HRMC [4], and simplifies to O(max{r, log d}) if K = 1, corresponding to the sample complexity of LRMC [3]. Intuitively, this means that information-theoretically we virtually pay no price for mixing low-rank matrices.

5 Alternating Algorithm for MMC

Theorems 1 and 2 show that MMC is theoretically possible under reasonable conditions (virtually the same as LRMC and HRMC). However, these results follow from a combinatorial analysis that is infeasible in practice (see Appendix B for details). To address this, we derive a practical alternating algorithm for MMC, which we call AMMC (alternating mixture matrix completion). The main idea is that MMC, like most mixture problems, can be viewed as a clustering task: if we could determine the entries of X_Ω that correspond to each X^k, then we would be able to partition X_Ω into K incomplete low-rank matrices and then complete them using standard LRMC. The question is how to determine which entries of X_Ω correspond to each X^k, i.e., how to determine Ω^1, . . . , Ω^K. To address this, let U^k ∈ R^{d×r} be a basis for the subspace containing the columns of X^k, and let x_ω denote the jth column of X_Ω, observed only on the entries indexed by ω ⊂ {1, . . . , d}. For any subspace, matrix or vector that is compatible with a set of indices, we use that set as a subscript to denote the restriction to the coordinates/rows in it. For example, U^k_ω ∈ R^{|ω|×r} denotes the restriction of U^k to the indices in ω. Suppose x_ω contains entries from X^k, and let ω^k ⊂ ω index such entries. Then our goal is to determine ω^k, as that would tell us the jth column of Ω^k. Since x_{ω^k} ∈ span{U^k_{ω^k}}, we can restate our goal as finding the set ω^k ⊂ ω such that x_{ω^k} ∈ span{U^k_{ω^k}}. To find ω^k, let υ ⊂ ω, and let P^k_υ := U^k_υ (U^{kT}_υ U^k_υ)^{−1} U^{kT}_υ denote the projection operator onto span{U^k_υ}. Recall that ‖P^k_υ x_υ‖ ≤ ‖x_υ‖, with equality if and only if x_υ ∈ span{U^k_υ}. It follows that ω^k is the largest set υ such that ‖P^k_υ x_υ‖ = ‖x_υ‖. In other words, ω^k is the solution to

argmax_{υ⊂ω} ‖P^k_υ x_υ‖ − ‖x_υ‖ + |υ|.   (1)

However, (1) is non-convex. Hence, in order to find the solution to (1), we propose the following erasure strategy. The main idea is to start our search with υ = ω, and then iteratively remove the entries (coordinates) of υ that most increase the gap between ‖P^k_υ x_υ‖ and ‖x_υ‖ (hence the term erasure). We stop this procedure when ‖P^k_υ x_υ‖ is equal to ‖x_υ‖ (or close enough). More precisely, we initialize υ = ω, and then iteratively redefine υ as the set υ \ i, where

i = argmax_{i∈υ} ‖P^k_{υ\i} x_{υ\i}‖ − ‖x_{υ\i}‖.   (2)

In words, i is the coordinate of the vector x_υ such that, if ignored, the gap between the remaining vector x_{υ\i} and its projection P^k_{υ\i} x_{υ\i} is reduced the most. At each iteration we remove (erase) such coordinate i from υ.
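A minimal sketch of this erasure strategy follows (our own rendering of (1)–(2); computing the projection via least squares and the stopping tolerance are implementation choices, not specified by the paper):

```python
import numpy as np

def proj_gap(U, x, idx):
    """||P_idx x_idx|| - ||x_idx|| (always <= 0, with equality iff x_idx
    lies in span{U_idx}); the projection is computed via least squares."""
    U_idx, x_idx = U[idx, :], x[idx]
    coef, *_ = np.linalg.lstsq(U_idx, x_idx, rcond=None)
    return np.linalg.norm(U_idx @ coef) - np.linalg.norm(x_idx)

def erase(U, x, omega, tol=1e-9):
    """Initialize upsilon = omega; per (2), repeatedly drop the coordinate
    whose removal closes the gap the most, until x_upsilon (nearly) lies in
    span{U_upsilon} or only r coordinates remain (at most |omega| - r steps)."""
    r = U.shape[1]
    ups = list(omega)
    while len(ups) > r and proj_gap(U, x, ups) < -tol:
        i = max(ups, key=lambda j: proj_gap(U, x, [t for t in ups if t != j]))
        ups.remove(i)
    return ups
```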
The intuition behind this approach is that the coordinates of x_υ that do not correspond to X^k are more likely to increase the gap between ‖P^k_υ x_υ‖ and ‖x_υ‖. Notice that if U^k is in general position (guaranteed by A2) and |υ| ≤ r, then span{U^k_υ} = R^{|υ|} (because U^k is r-dimensional). In such a case, it is trivially true that x_υ ∈ span{U^k_υ}, whence ‖P^k_υ x_υ‖ = ‖x_υ‖. Hence the procedure above is guaranteed to terminate after at most |ω| − r iterations. At such a point, |υ| = r, and we know that we were unable to find ω^k (or a subset of it). One alternative is to start with a different υ_0 ⊊ ω, and search again. This procedure may remove some entries from ω^k along the way, so in general the output of this process will be a set υ ⊂ ω^k. However, finding a subset of ω^k is enough to find ω^k. To see this, recall that since x_{ω^k} ∈ span{U^k_{ω^k}}, there is a coefficient vector θ^k ∈ R^r such that x_{ω^k} = U^k_{ω^k} θ^k. Since υ ⊂ ω^k, it follows that x_υ = U^k_υ θ^k. Furthermore, since |υ| ≥ r, we can find θ^k as

θ^k = (U^{kT}_υ U^k_υ)^{−1} U^{kT}_υ x_υ.

Since x_{ω^k} = U^k_{ω^k} θ^k, at this point we can identify ω^k by simple inspection (the matching entries in x_ω and U^k_ω θ^k). Recall that ω^k determines the jth column of Ω^k. Hence, if we repeat the procedure above for each column in X_Ω and each k, we can recover Ω^1, . . . , Ω^K. After this, we can use standard LRMC on X_{Ω^1}, . . . , X_{Ω^K} to recover X^1, . . . , X^K (which is the ultimate goal of MMC). The catch here is that this procedure requires knowing U^k, which we do not know. So essentially we have a chicken-and-egg problem: (i) if we knew U^k, we would be able to find Ω^k; (ii) if we knew Ω^k, we would be able to find U^k (and X^k, using standard LRMC on X_{Ω^k}). Since we know neither, we use a common technique for these kinds of problems: alternate between finding Ω^k and U^k. More precisely, we start with some initial guesses Û^1, . . . , Û^K, and then alternate between the following two steps until convergence (a code sketch of each step follows):

(i) Cluster. Let x_ω be the jth column in X_Ω. For each k = 1, . . . , K, we first erase entries from ω to obtain a set υ ⊂ ω indicating entries likely to correspond to X^k. This erasure procedure initializes υ = ω, and then repeats (2) (replacing P^k with P̂^k, the projection operator onto span{Û^k}) until we obtain a set υ ⊂ ω such that the projection ‖P̂^k_υ x_υ‖ is close to ‖x_υ‖. This way, the entries of x_υ are likely to correspond to X^k. Using these entries, we can estimate the coefficient of the jth column of X^k with respect to U^k, given by θ̂^k = (Û^{kT}_υ Û^k_υ)^{−1} Û^{kT}_υ x_υ. With θ̂^k we can also estimate the jth column of X^k as x̂^k := Û^k θ̂^k. Notice that both υ and x̂^k are obtained using Û^k, which may be different from U^k. It follows that υ may contain some entries that do not correspond to X^k, and x̂^k may be inaccurate. Hence, in general, x_ω and x̂^k_ω will have no matching entries, and so we cannot identify ω^k by simple inspection, as before. However, we can repeat our procedure for each k to obtain estimates x̂^1_ω, . . . , x̂^K_ω, and then assign each entry of x_ω to its closest match. More precisely, our estimate ω̂^k ⊂ ω (indicating the entries of x_ω that we estimate correspond to X^k) will contain entry i ∈ ω if |x_i − x̂^k_i| ≤ |x_i − x̂^ℓ_i| for every ℓ = 1, . . . , K. Repeating this procedure for each column of X_Ω will produce estimates Ω̂^1, . . . , Ω̂^K. Specifically, the jth column of Ω̂^k ∈ {0, 1}^{d×n} will contain a 1 in the rows indicated by ω̂^k.
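Putting the Cluster step into code, a sketch might look as follows (building on the `erase` sketch above; an illustration under our own conventions, not the authors' implementation):

```python
import numpy as np

def cluster_step(U_hats, X_obs):
    """One Cluster pass: for each column j and each k, erase down to entries
    consistent with span{U_hat^k}, form x_hat^k = U_hat^k theta_hat^k, and
    assign every observed entry to the closest prediction."""
    d, n = X_obs.shape
    K = len(U_hats)
    Omega_hats = [np.zeros((d, n), dtype=bool) for _ in range(K)]
    for j in range(n):
        omega = np.flatnonzero(~np.isnan(X_obs[:, j]))
        if omega.size == 0:
            continue
        x = np.nan_to_num(X_obs[:, j])
        preds = []
        for U in U_hats:
            ups = erase(U, x, omega)                   # entries likely from X^k
            coef, *_ = np.linalg.lstsq(U[ups, :], x[ups], rcond=None)
            preds.append(U @ coef)                     # estimate x_hat^k
        preds = np.stack(preds)                        # shape (K, d)
        best = np.argmin(np.abs(preds[:, omega] - x[omega]), axis=0)
        for t, i in enumerate(omega):
            Omega_hats[best[t]][i, j] = True
    return Omega_hats
```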
(ii) Complete. For each k, complete X_{Ω̂^k} using your favorite LRMC algorithm. Then compute a new estimate Û^k given by the leading r left singular vectors of the completion of X_{Ω̂^k}. The entire procedure is summarized in Algorithm 1 in Appendix D, where we also discuss initialization, generalizations to noise and outliers, and other simple extensions to improve performance (a code sketch of the full alternation appears after the simulations below).

6 Experiments

Simulations. We first present a series of synthetic experiments to study the performance of AMMC (Algorithm 1). In our simulations we first generate matrices U^k ∈ R^{d×r} and Θ^k ∈ R^{r×n} with i.i.d. N(0, 1) entries to use as bases and coefficients of the low-rank matrices in the mixture, i.e., X^k = U^k Θ^k ∈ R^{d×n}. Here d = n = 100, r = 5 and K = 2. With probability (1 − p) the (i, j)th entry of X_Ω is missing, and with probability p/K it is equal to the corresponding entry in X^k. Recall that, similar to EM and other alternating approaches, AMMC depends on initialization. Hence, we study the performance of AMMC as a function of both p and the distance δ ∈ [0, 1] between {U^k} and their initial estimates (measured as the normalized Frobenius norm of the difference between their projection operators). We measure accuracy using the normalized Frobenius norm of the difference between each X^k and its completion, and declare success if this quantity is below 10^{−8}. The results of 100 trials are summarized in Figure 2. Notice that the performance of AMMC degrades gracefully as the distance δ between the true subspaces U^k and their initial estimates grows. We see this type of behavior in similar state-of-the-art alternating algorithms for the simpler problem of HRMC [19]. Since MMC is highly non-convex, it is not surprising that if the initial estimates are poor (far from the truth), then AMMC may converge to a local minimum. Similarly, the performance of AMMC degrades gracefully as the fraction of observed entries p shrinks. Notice that even if X is fully observed (p = 1), if the initial estimates are very far from the true subspaces (δ = 1), then AMMC performs poorly. This shows, consistent with our discussion in Remark 1, that in practice MMC is a challenging problem even if X is fully observed. Hence, it is quite remarkable that AMMC works most of the time with as little as p ≈ 0.6, corresponding to observing ≈ 0.3 of the entries in each X^k. To put this in perspective, notice (Figure 2) that this is comparable to the amount of missing data tolerated by GSSC [19] and LMaFit [11], which are state of the art for the simpler problems of HRMC (the special case of MMC where all entries in each column of X correspond to the same X^k) and LRMC (the special case where there is only one X^k). To obtain Figure 2 we replicated the same setup as above, but with data generated according to the HRMC and LRMC models. Hence, we conclude that the performance of AMMC (on the more difficult problem of MMC) is comparable to the performance of state-of-the-art algorithms for the much simpler problems of HRMC and LRMC. We point out that according to Theorems 1 and 2, MMC is theoretically possible with p ≥ 1/2. However, we can see that (even if U^1, . . . , U^K are known, corresponding to δ = 0 in Figure 2) the performance of AMMC is quite poor if p < 0.6. This shows two things: (i) MMC is challenging even if U^1, . . . , U^K are known (as discussed in Remark 1), and (ii) there is a gap between what is information-theoretically possible and what is currently achievable in practice (with AMMC). In future work we will explore algorithms that can approach the information-theoretic limits.
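For completeness, here is a sketch of the overall alternation, i.e., the loop evaluated in the simulations above (again our own illustration; `lrmc_complete` is a hypothetical stand-in for any off-the-shelf LRMC solver, and `cluster_step` is the sketch given earlier):

```python
import numpy as np

def ammc(X_obs, U_hats, r, lrmc_complete, n_iters=50):
    """Alternate Cluster and Complete: fill in each X_{Omega_hat^k} with an
    LRMC solver, then refresh U_hat^k as the leading r left singular
    vectors of the completion."""
    for _ in range(n_iters):
        Omega_hats = cluster_step(U_hats, X_obs)
        for k, mask in enumerate(Omega_hats):
            Xk_obs = np.where(mask, X_obs, np.nan)   # keep only Omega_hat^k
            Xk_hat = lrmc_complete(Xk_obs, r)        # Complete step
            U, _, _ = np.linalg.svd(Xk_hat, full_matrices=False)
            U_hats[k] = U[:, :r]
    return U_hats, Omega_hats
```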
Real Data: Face Clustering and Inpainting. It is well known that images of an individual's face are approximately low-rank [63]. Natural images, however, usually contain faces of multiple individuals, often partially occluding each other, resulting in a mixture of low-rank matrices. In this experiment we demonstrate the power of MMC in two tasks: first, classifying partially occluded faces in an image, and second, image inpainting [47]. To this end, we use the Yale B dataset [64], containing 2432 photos of 38 subjects (64 photos per subject), each photo of size 48 × 42. We randomly select two subjects, and vectorize and concatenate their images to obtain two approximately rank-10 matrices X^1, X^2 ∈ R^{2016×64}. Next we combine them into X ∈ R^{2016×64}, each entry of which is equal to the corresponding entry in X^1 or X^2 with equal probability. This way, each column of X contains a mixed image with pixels from multiple individuals (the mixing step is sketched in code at the end of this section). We aim at two goals: (i) classify the entries in X according to X^1 and X^2, which in turn means locating and classifying the face of each individual in each image, and (ii) recover X^1 and X^2 from X, thus reconstructing the unobserved pixels in each image (inpainting). We repeat this experiment 30 times using AMMC (with Gaussian random initialization, known to produce near-orthogonal subspaces with high probability), obtaining a pixel classification error of 2.98% and a reconstruction error of 4.1%, which is remarkable in light of the fact that the ideal rank-10 approximation (no mixture, and full data) achieves 1.8%. Figure 3 shows an example, with more in Figure 4 in Appendix E. Notice that in this case we cannot compare against other methods, as AMMC is the first, and currently the only, method for MMC.

Real Data: MMC for Background Segmentation. As discussed in Section 2, Robust PCA models a video as the superposition of a low-rank background plus a sparse foreground with no structure. MMC brings more flexibility, allowing multiple low-rank matrices to model the background, structured foreground objects (sparse or abundant) and illumination artifacts, while at the same time also accounting for outliers (the entries/pixels that were assigned to no matrix in the mixture). In fact, contrary to Robust PCA, MMC allows a very large (even dominant) fraction of outliers. In this experiment we test AMMC on the task of background segmentation, using the Wallflower [65] and the I2R [66] datasets, containing videos of traffic cameras, lobbies, and pedestrians in the street. For each video, we compare AMMC (with Gaussian random initialization) against the best result amongst the following state-of-the-art algorithms for Robust PCA: [35–39]. We chose these methods based on the comprehensive review in [40], and on previous reports [41–43] indicating that these algorithms typically performed as well as or better than several others, including [44, 45]. In most cases, Robust PCA and AMMC perform quite similarly (see Figure 5 in Appendix E). However, in one case AMMC achieves 87.67% segmentation accuracy (compared with the ground truth, manually segmented), while Robust PCA only achieves 74.88% (Figure 3). Our hypothesis is that this is due to the large fraction of outliers (foreground). It is out of the scope of this paper, but of interest for future work, to collect real datasets with similar properties, where AMMC can be further tested. We point out, however, that AMMC is orders of magnitude slower than Robust PCA. Our future work will also focus on developing faster methods for MMC.
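The entry-wise mixing used in the face experiment is simple to reproduce; a sketch (our own, with the two-class label convention and the permutation-invariant error metric as our assumptions) is:

```python
import numpy as np

def mix_two(X1, X2, seed=0):
    """Entry-wise mixture with equal probability, as in the face experiment
    (for Yale B, X1 and X2 are 2016 x 64). Also returns the ground-truth
    assignment for scoring the pixel classification error."""
    rng = np.random.default_rng(seed)
    pick = rng.integers(0, 2, size=X1.shape)   # 0 -> X1, 1 -> X2
    return np.where(pick == 0, X1, X2), pick

def pixel_classification_error(pick_true, pick_est):
    # Class labels are only identifiable up to permutation of the two classes.
    err = np.mean(pick_true != pick_est)
    return min(err, 1 - err)
```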
1. What is the main contribution of the paper regarding matrix completion and subspace clustering? 2. What are the sufficient conditions provided by the paper for recoverability? 3. How does the paper present a heuristic for the problem, and how does it relate to other algorithms? 4. What is the assessment of the reviewer regarding the novelty and difficulty of the problem addressed in the paper? 5. What is the overall recommendation of the reviewer regarding the acceptance of the paper?
Review
The paper considers an interesting problem that generalizes both subspace clustering and matrix completion, and provides some sufficient conditions for recoverability. The setting is the following: we are given a matrix A that is a "mixture" of K low-rank matrices, and the goal is to recover them. By a mixture, we mean that every non-zero entry of A is equal to the entry of one of the low-rank matrices. The paper presents a (somewhat strong) sufficient condition under which the original matrix can be recovered in an information-theoretic sense. The problem turns out to be fairly difficult, so some such assumptions are necessary for unique recovery. The condition has the feel of Hall's condition for matching; it is also shown to hold w.h.p. in a natural probabilistic model (akin to the one used in "regular" matrix completion, K = 1). Overall I find the result interesting. The condition is combinatorial, and it seems unlikely that one can verify in polynomial time whether a set of matrices Ω satisfies the condition. Also, the paper presents a heuristic for the problem, which resembles the K-SVD algorithm for sparse coding and, at an even higher level, Lloyd's algorithm. It is a natural heuristic, and the authors show that it works fairly well in applications. Overall, the paper has some interesting ideas about proving unique recovery for a pretty difficult matrix problem. I recommend acceptance.
NIPS
Title Mixture Matrix Completion Abstract Completing a data matrix X has become an ubiquitous problem in modern data science, with motivations in recommender systems, computer vision, and networks inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several lowrank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data. N/A Completing a data matrix X has become an ubiquitous problem in modern data science, with motivations in recommender systems, computer vision, and networks inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several lowrank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data. 1 Introduction Matrix completion aims to estimate the missing entries of an incomplete data matrix X. One of its main motivations arises in recommender systems, where each row represents an item, and each column represents a user. We only observe an entry in X whenever a user rates an item, and the goal is to predict unseen ratings in order to make good recommendations. Related Work. In 2009, Candès and Recht [1] introduced low-rank matrix completion (LRMC), arguably the most popular model for this task. LRMC assumes that each column (user) can be represented as a linear combination of a few others, whence X is low-rank. Later in 2012, Eriksson et. al. [2] introduced high-rank matrix completion (HRMC), also known as subspace clustering with missing data. This more general model assumes that each column of X comes from one of several low-rank matrices, thus allowing several types of users. Since their inceptions, both LRMC and HRMC have attracted a tremendous amount of attention (see [1–27] for a very incomplete list). Paper contributions. This paper introduces an even more general model: mixture matrix completion (MMC), which assumes that each entry in X (rather than column) comes from one out of several low-rank matrices, and the goal is to recover the matrices in the mixture. Figure 1 illustrates the generalization from LRMC to HRMC and to MMC. 
One of the main motivations behind MMC is that users often share the same account, and so each column in X may contain ratings from several users. Nonetheless, as we show in Section 2, MMC is also a more accurate model for many other contemporary applications, including networks inference, computer vision, and metagenomics. This paper makes several fundamental contributions about MMC: – Well posedness. First, we show that MMC is theoretically possible if we observe the right entries and the mixture is generic (precise definitions below). 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. – Identifiability conditions. We provide precise information-theoretical conditions on the entries that need to be observed such that a mixture of K low-rank matrices is identifiable. These extend similar recent results of LRMC [3] and HRMC [4] to the setting of MMC. The subtlety in proving these results is that there could exist false mixtures that agree with the observed entries, even if the sampling is uniquely completable for LRMC and HRMC (see Example 1). In other words, there exits samplings that are identifiable for LRMC (and HRMC) but are not identifiable for MMC, and so in general it is not enough to simply have K times more samples. Hence, it was necessary to derive identifiability conditions for MMC, similar to those of LRMC in [3] and HRMC in [4]. We point out that in contrast to typical completion theory [1, 2, 5–20], these type of identifiability conditions are deterministic (not restricted to uniform sampling), and make no coherence assumptions. – Sample complexity. If X ∈ Rd×n is a mixture of K rank-r matrices, we show that with high probability, our identifiability conditions will be met if each entry is observed with probability O(K d max{r, log d}), thus deriving the sample complexity of MMC, which is the same as the sample complexity of HRMC [4], and simplifies to O( 1 d max{r, log d}) in the case of K = 1, which corresponds to the sample complexity of LRMC [3]. Intuitively, this means that informationtheoretically, we virtually pay no price for mixing low-rank matrices. – Practical algorithm. Our identifiability results follow from a combinatorial analysis that is infeasible in practice. To address this, we give a practical alternating algorithm for MMC whose performance (in the more difficult problem of MMC) is comparable to state-of-the-art algorithms for the much simpler problems of HRMC and LRMC. 2 Motivating Applications Besides recommender systems, there are many important applications where data can be modeled as a mixture of low-rank matrices. Here are a few examples motivated by current data science challenges. Networks Inference. Estimating the topology of a network (internet, sensor networks, biological networks, social networks) has been the subject of a large body of research in recent years [28–34]. To this end, companies routinely collect distances between nodes (e.g., computers) that connect with monitors (e.g., Google, Amazon, Facebook) in a data matrix X. In a simplified model, if node j is in subnet k, then the jth column can be modeled as the sum of (i) the distance between node j and router k, and (ii) the distance between router k and each of the monitors. Hence, the columns (nodes) corresponding to each subnet form a low-rank matrix, which is precisely the model assumed by HRMC. However, depending on the network’s traffic, each node may use different routes to communicate at different times. 
Consequently, the same column in X may contain measurements from different low-rank matrices. In other words, distance matrices of networks are a mixture of low-rank matrices. Computer Vision. Background segmentation is one of the most fundamental and crucial tasks in computer vision, yet it can be tremendously challenging. The vectorized frames of a video can be modeled as columns with some entries (pixels) in a low-rank background, and some outlier entries, corresponding to the foreground. Typical methods, like the acclaimed Robust PCA (principal component analysis) [35–46], assume that the foreground is sparse and has no particular structure. However, in many situations this is not the case. For instance, since the location of an object in consecutive frames is highly correlated, the foreground can be highly structured. Similarly, the foreground may not be sparse, specially if there are foreground objects moving close to the camera (e.g., in a selfie). Even state-of-the-art methods fail in scenarios like these, which are not covered by current models (see Figure 3 for an example). In contrast, MMC allows to use one matrix in the mixture to represent the background, other matrices to represent foreground objects (small or large, even dominant), and even other matrices to account for occlusions and other illumination/visual artifacts. Hence, MMC can be a more accurate model for video segmentation and other image processing tasks, including inpainting [47] and face clustering, which we explore in our experiments. Metagenomics. One contemporary challenge in Biology is to quantify the presence of different types of bacteria in a system (e.g., the human gut microbiome) [48–52]. The main idea is to collect several DNA samples from such a system, and use their genomic information to count the number of bacteria of each type (the genome of each bacterium determines its type). In practice, to obtain an organism’s genome (e.g., a person’s genome), biologists feed a DNA sample (e.g., blood or hair) to a sequencer machine that produces a series of reads, which are short genomic sequences that can later be assembled and aligned to recover the entire genome. The challenge arises when the sequencer is provided a sample with DNA from multiple organisms, as is the case in the human gut microbiome, where any sample will contain a mixture of DNA from multiple bacteria that cannot be disentangled into individual bacterium. In this case, each read produced by the sequencer may correspond to a different type of bacteria. Consequently, each DNA sample (column) may contain genes (rows) from different types of bacteria, which is precisely the model that MMC describes. 3 Problem Statement Let X1, . . . ,XK ∈ Rd×n be a set of rank-r matrices, and let Ω1, . . . ,Ωk ∈ {0, 1}d×n indicate disjoint sets of observed entries. Suppose X1, . . . ,XK and Ω1, . . . ,ΩK are unknown, and we only observe XΩ, defined as follows: – If the (i, j)th entry of Ωk is 1, then the (i, j)th entry of XΩ is equal to the (i, j) th entry of Xk. – If the (i, j)th entry of Ωk is 0 for every k = 1, . . . ,K, then the (i, j)th entry of XΩ is missing. This way Ωk indicates the entries of XΩ that correspond to X k, and Ω := ∑K k=1 Ω k indicates the set of all observed entries. Since Ω1, . . . ,ΩK are disjoint, Ω ∈ {0, 1}d×n. Equivalently, each observed entry of XΩ corresponds to an entry in either X 1 or X2 or . . . or XK (i.e., there are no collisions). In words, XΩ contains a mixture of entries from several low-rank matrices. 
The goal of MMC is to recover all the columns of X1, . . . ,XK that have observations in XΩ (see Figure 1 to build some intuition). In our recommendations example, a column xω ∈ XΩ will contain entries from Xk whenever xω contains ratings from a user of the k th type. Similarly, the same column will contain entries from Xℓ whenever it also contains ratings from a user of the ℓth type. We would like to predict the preferences of both users, or more generally, all users that have ratings in xω . On the other hand, if xω has no entries from X k, then xω involves no users of the k th type, and so it would be impossible (and futile) to try to recover such column of Xk. In MMC, the matrices Ω 1, . . . ,ΩK play the role of the hidden variables constantly present in mixture problems. Notice that if we knew Ω1, . . . ,ΩK, then we could partition XΩ accordingly, and estimate X 1, . . . ,XK using standard LRMC. The challenge is that we do not know Ω1, . . . ,ΩK. 3.1 The Subtleties of MMC The main theoretical difficulty of MMC is that depending on the pattern of missing data, there could exist false mixtures. That is, matrices X̃1, . . . , X̃K, other than X1, . . . ,XK, that agree with XΩ, even if X1, . . . ,XK are observed on uniquely completable patterns for LRMC. Example 1. Consider the next rank-1 matrices X1,X2, and their partially observed mixture XΩ: X 1 = 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 , X2 = 1 2 3 4 2 4 6 8 3 6 9 12 4 8 12 16 5 10 15 20 , XΩ = 1 · 3 4 1 2 · 8 3 2 3 · 4 8 3 4 · 10 15 4 . We can verify that X1 and X2 are observed on uniquely completable sampling patterns for LRMC [3]. Nonetheless, we can construct the following false rank-1 matrices that agree with XΩ: X̃ 1 = 60 40 15 4 1 2/3 1/4 1/15 3 2 3/4 1/5 12 8 3 4/5 60 40 15 4 , X̃2 = 1 1/4 3 1 8 2 24 8 1 1/4 3 1 4 1 12 4 40 10 120 40 . This shows that even with unlimited computational power, if we exhaustively search all the identifiable patterns for LRMC, we can end up with false mixtures. Hence the importance of studying the identifiable patterns for MMC. False mixtures arise because we do not know a priori which entries of XΩ correspond to each X k. Hence, it is possible that a rank-r matrix X̃ agrees with some entries from X1, other entries from X2, and so on. Furthermore, X̃ may even be the only rank-r matrix that agrees with such combination of entries, as in Example 1. Remark 1. Recall that LRMC and HRMC are tantamount to identifying the subspace(s) containing the columns of X [3, 4]. In fact, if we knew such subspaces, LRMC and HRMC become almost trivial problems (see Appendix A for details). Similarly, if no data is missing, HRMC simplifies to subspace clustering, which has been studied extensively, and is now reasonably well-understood [53–62]. In contrast, MMC remains challenging even if the subspaces corresponding to the low-rank matrices in the mixture are known, and even X is fully observed. We refer the curious reader to Appendix A, and point out the bottom row and the last column in Figure 2, which show the MMC error when the underlying subspaces are known, and when X is fully observed. 4 Main Theoretical Results Example 1 shows the importance of studying the identifiable patterns for MMC, which we do now. First recall that r + 1 samples per column are necessary for LRMC [3]. This implies that even if an oracle told us Ω1, . . . ,ΩK, if we intend to recover a column of Xk, we need to observe it on at least r + 1 entries. 
Hence we assume without loss of generality that: (A1) Each column of Ωk has either 0 or r + 1 non-zero entries. In words, A1 requires that each column of Xk to be recovered is observed on exactly r + 1 entries. Of course, observing more entries may only aid completion. Hence, rather than an assumption, A1 describes the most difficult scenario where we have the bare minimum amount of information required for completion. We use A1 to ease notation, exposition and analysis. All our results can be easily extended to the case where A1 is droped (see Remark 2). Without further assumptions on X, completion (of any kind) may be impossible. To see this consider the simple example where X is only supported on the ith row. Then it would be impossible to recover X unless all columns were observed on the ith row. In most completion applications this would be unlikely. For example, in a movies recommender system like Netflix, this would require that all the users watched (and rated) the same movie. To rule out scenarios like these, typical completion theory requires incoherence and uniform sampling. Incoherence guarantees that the information is well-spread over the matrix. Uniform sampling guarantees that all rows and columns are sufficiently sampled. However, it is usually unclear (and generally unverifiable) whether an incomplete matrix is coherent. Furthermore, observations are hardly ever uniformly distributed. For instance, we do not expect children to watch adults movies. To avoid these issues, instead of incoherence we will assume that X is a generic mixture of low-rank matrices. More precisely, we assume that: (A2) X1, . . . ,XK are drawn independently according to an absolutely continuous distribution with respect to the Lebesgue measure on the determinantal variety (set of all d × n, rank-r matrices). A2 essentially requires that each Xk is a generic rank-r matrix. This type of genericity assumptions are becoming increasingly common in studies of LRMC, HRMC, and related problems [3, 4, 23– 27, 46]. See Appendix C for a further discussion on A2, and its relation to other common assumptions from the literature. With this, we are ready to present our main theorem. It gives a deterministic condition on Ω to guarantee that X1, . . . ,XK can be identified from XΩ. This provides information-theoretic requirements for MMC. The proof is in Appendix B. Theorem 1. Let A1-A2 hold. Suppose there exist matrices {Ωτ} r+1 τ=1 formed with disjoint subsets of (d− r + 1) columns of Ωk, such that for every τ : (†) Every matrix Ω′ formed with a proper subset of the columns in Ωτ has at least r fewer columns than non-zero rows. Then all the columns of Xk that have observations in XΩ are identifiable. In words, Theorem 1 states that MMC is possible as long as we observe the right entries in each Xk. The intuition is that each of these entries imposes a constraint on what X1, . . . ,XK may be, and the pattern in Ω determines whether these constraints are redundant. Patterns satisfying the conditions of Theorem 1 guarantee that X1, . . . ,XK is the only mixture that satisfies the constraints produced by the observed entries. Remark 2. Recall that r + 1 samples per column are strictly necessary for completion. A1 requires that we have exactly that minimum number of samples. If Xk is observed on more than r + 1 entries per column, it suffices that Ωk contains a pattern satisfying the conditions of Theorem 1. Theorem 1 shows that MMC is possible if the samplings satisfy certain combinatorial conditions. 
Our next result shows that if each entry of Xk is observed on XΩ with probability O( 1 d max{r, log d}), then with high probability Ωk will satisfy such conditions. The proof is in Appendix B. Theorem 2. Suppose r ≤ d 6 and n ≥ (r + 1)(d− r + 1). Let ǫ > 0 be given. Suppose that an entry of XΩ is equal to the corresponding entry of X k with probability p ≥ 2 d max { 2r, 12 ( log(d ǫ ) + 1 )} . Then Ωk satisfies the sampling conditions of Theorem 1 with probability ≥ 1− 2(r + 1)ǫ. Theorem 2 shows that the sample complexity of MMC is O(Kmax{r, log d}) observations per column of XΩ. This is exactly the same as the sample complexity of HRMC [4], and simplifies to O(max{r, log d}) if K = 1, corresponding to the sample complexity of LRMC [3]. Intuitively, this means that information-theoretically, we virtually pay no price for mixing low-rank matrices. 5 Alternating Algorithm for MMC Theorems 1 and 2 show that MMC is theoretically possible under reasonable conditions (virtually the same as LRMC and HRMC). However, these results follow from a combinatorial analysis that is infeasible in practice (see Appendix B for details). To address this, we derive a practical alternating algorithm for MMC, which we call AMMC (alternating mixture matrix completion). The main idea is that MMC, like most mixture problems, can be viewed as a clustering task: if we could determine the entries of XΩ that correspond to each X k, then we would be able to partition XΩ into K incomplete low-rank matrices, and then complete them using standard LRMC. The question is how to determine which entries of XΩ correspond to each X k, i.e., how to determine Ω1, . . . ,ΩK. To address this, let Uk ∈ Rd×r be a basis for the subspace containing the columns of Xk, and let xω denote the j th column of XΩ, observed only on the entries indexed by ω ⊂ {1, . . . , d}. For any subspace, matrix or vector that is compatible with a set of indices ·, we use the subscript · to denote its restriction to the coordinates/rows in ·. For example, Uk ω ∈ R|ω|×r denotes the restriction of Uk to the indices in ω. Suppose xω contains entries from X k, and let ωk ⊂ ω index such entries. Then our goal is to determine ωk, as that would tell us the jth column of Ωk. Since x ω k ∈ span{Uk ω k}, we can restate our goal as finding the set ωk ⊂ ω such that x ω k ∈ span{Uk ω k}. To find ωk, let υ ⊂ ω, and let Pk υ := Uk υ (UkT υ U k υ )−1UkT υ denote the projection operator onto span{Uk υ }. Recall that ‖Pk υ xυ‖ ≤ ‖xυ‖, with equality if and only if xυ ∈ span{U k υ }. It follows that ωk is the largest set υ such that ‖Pk υ xυ‖ = ‖xυ‖. In other words, ω k is the solution to argmax υ⊂ω ‖Pk υ xυ‖ − ‖xυ‖ + |υ|. (1) However, (1) is non-convex. Hence, in order to find the solution to (1), we propose the following erasure strategy. The main idea is to start our search with υ = ω, and then iteratively remove the entries (coordinates) of υ that most increase the gap between ‖Pk υ xυ‖ and ‖xυ‖ (hence the term erasure). We stop this procedure when ‖Pk υ xυ‖ is equal to ‖xυ‖ (or close enough). More precisely, we initialize υ = ω, and then iteratively redefine υ as the set υ = υ\i, where i = argmax i∈υ ‖Pk υ\ixυ\i‖ − ‖xυ\i‖. (2) In words, i is the coordinate of the vector xυ such that if ignored, the gap between the remaining vector x υ\i and its projection P k υ\ixυ\i is reduced the most. At each iteration we remove (erase) such coordinate i from υ. 
The intuition behind this approach is that the coordinates of xυ that do not correspond to Xk are more likely to increase the gap between ‖Pk υ xυ‖ and ‖xυ‖. Notice that if Uk is in general position (guaranteed by A2) and |υ| ≤ r, then Uk υ = R|υ| (because Uk is r-dimensional). In such case, it is trivially true that xυ ∈ span{U k υ }, whence ‖Pk υ xυ‖ = ‖xυ‖. Hence the procedure above is guaranteed to terminate after at most ‖ω‖ − r iterations. At such point, |υ| = r, and we know that we were unable to find ωk (or a subset of it). One alternative is to start with a different υ0 ( ω, and search again. This procedure may remove some entries from ωk along the way, so in general, the output of this process will be a set υ ⊂ ωk. However, finding a subset of ωk is enough to find ωk. To see this, recall that since x ω k ∈ span{Uk ω k}, there is a coefficient vector θ k ∈ Rr such that x ω k = Uk ω kθ k. Since υ ⊂ ωk, it follows that xυ = U k υ θ k. Furthermore, since |υ| ≥ r, we can find θk as θ k = (UkT υ U k υ )−1UkT υ xυ. Since xωk = U k ω kθ k, at this point we can identify ωk by simple inspection (the matching entries in xω and U k ω θ k). Recall that ωk determines the jth column of Ω k. Hence, if we repeat the procedure above for each column in XΩ and each k, we can recover Ω 1, . . . ,ΩK. After this, we can use standard LRMC on XΩ1 , . . . ,XΩK to recover X 1, . . .XK (which is the ultimate goal of MMC). The catch here is that this procedure requires knowing Uk, which we do not know. So essentially we have a chicken and egg problem: (i) if we knew Uk, we would be able to find Ωk. (ii) If we knew Ω k we would be able to find Uk (and Xk, using standard LRMC on XΩk). Since we know neither, we use a common technique for these kind of problems: alternate between finding Ωk and Uk. More precisely, we start with some initial guesses Û1, . . . , ÛK, and then alternate between the following two steps until convergence: (i) Cluster. Let xω be the j th column in XΩ. For each k = 1, . . . ,K, we first erase entries from ω to obtain a set υ ⊂ ω indicating entries likely to correspond to Xk. This erasure procedure initializes υ = ω, and then repeats (2), (replacing Pk with P̂k, which denotes the projection operator onto span{Ûk}) until we to obtain a set υ ⊂ ω such that the projection ‖P̂k υ xυ‖ is close to ‖xυ‖. This way, the entries of xυ are likely to correspond to X k. Using these entries, we can estimate the coefficient of the jth column of Xk with respect to Uk, given by θ̂k = (ÛkT υ kÛ k υ k) −1 Û kT υ kxυk . With θ̂ k we can also estimate the jth column of Xk as x̂ k := Ûkθ̂k. Notice that both υ and x̂k are obtained using Ûk, which may be different from U k. It follows that υ may contain some entries that do not correspond to Xk, and x̂k may be inaccurate. Hence, in general, xω and x̂ k ω will have no matching entries, and so we cannot identify ωk by simple inspection, as before. However, we can repeat our procedure for each k to obtain estimates x̂1 ω , . . . , x̂K ω , and then assign each entry of xω to its closest match. More precisely, our estimate ω̂k ⊂ ω (indicating the entries of xω that we estimate that correspond to Xk) will contain entry i ∈ ω if |xi − x̂ k i | ≤ |xi − x̂ ℓ i | for every ℓ = 1, . . . ,K. Repeating this procedure for each column of XΩ will produce estimates Ω̂ 1, . . . , Ω̂K. Specifically, the jth column of Ω̂k ∈ {0, 1}d×n will contain a 1 in the rows indicated by ω̂k. (ii) Complete. For each k, complete X Ω̂k using your favorite LRMC algorithm. 
Then compute a new estimate Ûk given by the leading r left singular vectors of the completion of X Ω̂k . The entire procedure is summarized in Algorithm 1, in Appendix D, where we also discuss initialization, generalizations to noise and outliers, and other simple extensions to improve performance. 6 Experiments Simulations. We first present a series of synthetic experiments to study the performance of AMMC (Algorithm 1). In our simulations we first generate matrices Uk ∈ Rd×r and Θk ∈ Rr×n with i.i.d. N(0, 1) entries to use as bases and coefficients of the low-rank matrices in the mixture, i.e., X k = UkΘk ∈ Rd×n. Here d = n = 100, r = 5 and K = 2. With probability (1− p), the (i, j)th entry of XΩ will be missing, and with probability p/K it will be equal to the corresponding entry in X k. Recall that similar to EM and other alternating approaches, AMMC depends on initialization. Hence, we study the performance of AMMC as a function of both p and the distance δ ∈ [0, 1] between {Uk} and their initial estimates (measured as the normalized Frobenius norm of the difference between their projection operators). We measure accuracy using the normalized Frobenius norm of the difference between each Xk and its completion. We considered a success if this quantity was below 10−8. The results of 100 trials are summarized in Figure 2. Notice that the performance of AMMC decays nicely with the distance δ between the true subspaces U k and their initial estimates. We can see this type of behavior in similar state-of-the-art alternating algorithms for the simpler problem of HRMC [19]. Since MMC is highly non-convex, it is not surprising that if the initial estimates are poor (far from the truth), then AMMC may converge to a local minimum. Similarly, the performance of AMMC decays nicely with the fraction of observed entries p. Notice that even if X is fully observed (p = 1), if the initial estimates are very far from the true subspaces (δ = 1), then AMMC performs poorly. This shows, consistent with our discussing in Remark 1, that in practice MMC is a challenging problem even if X is fully observed. Hence, it is quite remarkable that AMMC works most of the time with as little as p ≈ 0.6, corresponding to observing ≈ 0.3 of the entries in each Xk. To put this under perspective, notice (Figure 2) that this is comparable the amount of missing data tolerated by GSSC [19] and LMaFit [11], which are state-of-the-art for the simpler problems of HRMC (special case of MMC where all entries in each column of X correspond to the same Xk) and LRMC (special case where there is only one Xk). To obtain Figure 2 we replicated the same setup as above, but with data generated according to the HRMC and LRMC models. Hence, we conclude that the performance of AMMC (in the more difficult problem of MMC) is comparable to the performance of state-of-the-art algorithms for the much simpler problems of HRMC and LRMC. We point out that according to Theorems 1 and 2, MMC is theoretically possible with p ≥ 1/2. However, we can see that (even if U1, . . . ,UK are known, corresponding to δ = 0 in Figure 2) the performance of AMMC is quite poor if p < 0.6. This shows two things: (i) MMC is challenging even if U1, . . . ,UK are known (as discussed in Remark 1), and (ii) there is a gap between what is information-theoretically possible and what is currently possible in practice (with AMMC). In future work we will explore algorithms that can approach the information-theoretic limits. Real Data: Face Clustering and Inpainting. 
Real Data: Face Clustering and Inpainting. It is well known that images of an individual's face are approximately low-rank [63]. Natural images, however, often contain faces of multiple individuals, frequently partially occluding each other, resulting in a mixture of low-rank matrices. In this experiment we demonstrate the power of MMC in two tasks: first, classifying partially occluded faces in an image, and second, image inpainting [47]. To this end, we use the Yale B dataset [64], containing 2432 photos of 38 subjects (64 photos per subject), each photo of size 48 × 42. We randomly select two subjects, and vectorize and concatenate their images to obtain two approximately rank-10 matrices $X^1, X^2 \in \mathbb{R}^{2016 \times 64}$. Next we combine them into $X \in \mathbb{R}^{2016 \times 64}$, each entry of which is equal to the corresponding entry of $X^1$ or $X^2$ with equal probability. This way, each column of $X$ contains a mixed image with pixels from both individuals. We aim at two goals: (i) classify the entries in $X$ according to $X^1$ and $X^2$, which in turn means locating and classifying the face of each individual in each image, and (ii) recover $X^1$ and $X^2$ from $X$, thus reconstructing the unobserved pixels in each image (inpainting). We repeat this experiment 30 times using AMMC (with Gaussian random initialization, known to produce near-orthogonal subspaces with high probability), obtaining a pixel classification error of 2.98% and a reconstruction error of 4.1%, which is remarkable in light of the fact that the ideal rank-10 approximation (no mixture, and full data) achieves 1.8%. Figure 3 shows an example, with more in Figure 4 in Appendix E. Notice that in this case we cannot compare against other methods, as AMMC is the first, and currently the only, method for MMC.
Real Data: MMC for Background Segmentation. As discussed in Section 2, Robust PCA models a video as the superposition of a low-rank background plus a sparse, unstructured foreground. MMC brings more flexibility, allowing multiple low-rank matrices to model background, structured foreground objects (sparse or abundant), and illumination artifacts, while also accounting for outliers (the entries/pixels assigned to no matrix in the mixture). In fact, contrary to Robust PCA, MMC allows a very large (even dominant) fraction of outliers. In this experiment we test AMMC on the task of background segmentation, using the Wallflower [65] and I2R [66] datasets, containing videos of traffic cameras, lobbies, and pedestrians in the street. For each video, we compare AMMC (with Gaussian random initialization) against the best result among the following state-of-the-art algorithms for Robust PCA: [35–39]. We chose these methods based on the comprehensive review in [40] and previous reports [41–43] indicating that these algorithms typically perform as well as or better than several others, including [44, 45]. In most cases, Robust PCA and AMMC perform quite similarly (see Figure 5 in Appendix E). However, in one case AMMC achieves 87.67% segmentation accuracy (compared with the manually segmented ground truth), while Robust PCA only achieves 74.88% (Figure 3). Our hypothesis is that this is due to the large portion of outliers (foreground). It is outside the scope of this paper, but of interest for future work, to collect real datasets with similar properties on which AMMC can be further tested. We point out, however, that AMMC is orders of magnitude slower than Robust PCA. Our future work will also focus on developing faster methods for MMC.
1. What is the focus of the paper regarding machine learning tools for low-rank and incomplete data?
2. What are the strengths of the paper, particularly in its theoretical analysis and practical algorithm?
3. Do you have any concerns or questions about the paper's contributions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
This paper presents mixture matrix completion (MMC) as a novel machine learning tool for learning from low-rank and incomplete data. MMC is a problem similar to subspace clustering with missing data, but more difficult. Specifically, in MMC the data is assumed to lie in a union of (unknown) low-dimensional subspaces, but the data is not fully observed: only a few entries of each data point are observed, and (unlike subspace clustering with missing data) there is no information as to which entries correspond to the same point. Therefore, one would need to estimate the assignment of entries to data points, the assignment of data points to subspaces, the missing entries, and the subspaces altogether. The major contribution of this paper is the introduction of the MMC problem, a theoretical analysis of when the problem of MMC is well-defined, and an alternating estimation algorithm for solving the MMC problem.
Strengths:
- The paper presents a new machine learning problem formulation that seems natural for addressing practical tasks such as background segmentation. It also has a nice preliminary study of this problem in terms of a theoretical analysis of the identifiability problem, a simple practical algorithm, and experiments on real and synthetic data.
- The paper is well written and the logic flow is clear.
Weaknesses:
- My major concern with the paper is that the theoretical results seem to be stated in vague terms and I don't fully understand them. In Theorem 1, what does it mean to say that Omega^k "contains" disjoint matrices Omega_tau? Does it mean that Omega^k is a stack of matrices Omega_tau column-wise? Also, what does it mean that it is "possible" to perfectly recover all columns of X^k? Does it mean that the subspaces and the missing entries can be uniquely determined from the observed data? In addition, how does this result compare with the deterministic result for the problem of subspace clustering with missing entries in [4]?
Overall this is a nice paper that brings a new problem formulation to attention. The study of this problem is still preliminary, though, as one can clearly see that there is no very successful application of the proposed method yet. The experiment on face clustering has a very unrealistic test scenario, and the experiment on background segmentation does not generate results as good as classical robust PCA. Nonetheless, addressing these challenges could be the topic of future study. My reason for not giving a higher rating is that I cannot fully appreciate the theoretical studies, as mentioned above.
Additional comments:
- It appears that the "cluster" step in the proposed algorithm is very complicated. I'm wondering if this step can be decomposed into the following substeps to make it easier to explain: the estimation of Omega_k is composed of two separate tasks, 1) clustering the entries in each column to different data points, and 2) assigning the data points extracted from all columns to multiple subspaces. In fact, once one has solved task 1) above, the problem reduces to subspace clustering with missing data, which could be solved by alternating between matrix completion and subspace clustering.
- Since the overall problem is nonconvex, initialization is expected to be very important for the algorithm to achieve good performance. Can the authors comment on how their algorithm is initialized in the real-data experiments?
Response to rebuttals: The updated statement of Theorem 1 in the rebuttal is much clearer, and it seems necessary to incorporate it into the final version. Also, given that the conditions in Theorems 1 and 2 are very similar to those in [4] for a related problem, a more detailed discussion of their connections would help readers understand the merits of these results. I maintain my overall rating as above and recommend weak acceptance for this work.
NIPS
Title Mixture Matrix Completion Abstract Completing a data matrix X has become a ubiquitous problem in modern data science, with motivations in recommender systems, computer vision, and networks inference, to name a few. One typical assumption is that X is low-rank. A more general model assumes that each column of X corresponds to one of several low-rank matrices. This paper generalizes these models to what we call mixture matrix completion (MMC): the case where each entry of X corresponds to one of several low-rank matrices. MMC is a more accurate model for recommender systems, and brings more flexibility to other completion and clustering problems. We make four fundamental contributions about this new model. First, we show that MMC is theoretically possible (well-posed). Second, we give its precise information-theoretic identifiability conditions. Third, we derive the sample complexity of MMC. Finally, we give a practical algorithm for MMC with performance comparable to the state-of-the-art for simpler related problems, both on synthetic and real data. 1 Introduction Matrix completion aims to estimate the missing entries of an incomplete data matrix X. One of its main motivations arises in recommender systems, where each row represents an item and each column represents a user. We only observe an entry in X whenever a user rates an item, and the goal is to predict unseen ratings in order to make good recommendations. Related Work. In 2009, Candès and Recht [1] introduced low-rank matrix completion (LRMC), arguably the most popular model for this task. LRMC assumes that each column (user) can be represented as a linear combination of a few others, whence X is low-rank. Later, in 2012, Eriksson et al. [2] introduced high-rank matrix completion (HRMC), also known as subspace clustering with missing data. This more general model assumes that each column of X comes from one of several low-rank matrices, thus allowing several types of users. Since their inception, both LRMC and HRMC have attracted a tremendous amount of attention (see [1–27] for a very incomplete list). Paper contributions. This paper introduces an even more general model: mixture matrix completion (MMC), which assumes that each entry in X (rather than each column) comes from one of several low-rank matrices, and the goal is to recover the matrices in the mixture. Figure 1 illustrates the generalization from LRMC to HRMC and to MMC.
One of the main motivations behind MMC is that users often share the same account, so each column in X may contain ratings from several users. Nonetheless, as we show in Section 2, MMC is also a more accurate model for many other contemporary applications, including networks inference, computer vision, and metagenomics. This paper makes several fundamental contributions about MMC:
– Well-posedness. First, we show that MMC is theoretically possible if we observe the right entries and the mixture is generic (precise definitions below).
– Identifiability conditions. We provide precise information-theoretic conditions on the entries that need to be observed such that a mixture of K low-rank matrices is identifiable. These extend similar recent results on LRMC [3] and HRMC [4] to the setting of MMC. The subtlety in proving these results is that there could exist false mixtures that agree with the observed entries, even if the sampling is uniquely completable for LRMC and HRMC (see Example 1). In other words, there exist sampling patterns that are identifiable for LRMC (and HRMC) but are not identifiable for MMC, and so in general it is not enough to simply have K times more samples. Hence, it was necessary to derive identifiability conditions for MMC, similar to those of LRMC in [3] and HRMC in [4]. We point out that, in contrast to typical completion theory [1, 2, 5–20], these identifiability conditions are deterministic (not restricted to uniform sampling) and make no coherence assumptions.
– Sample complexity. If $X \in \mathbb{R}^{d \times n}$ is a mixture of K rank-r matrices, we show that with high probability our identifiability conditions will be met if each entry is observed with probability $O(\frac{K}{d}\max\{r, \log d\})$, thus deriving the sample complexity of MMC, which is the same as the sample complexity of HRMC [4], and which simplifies to $O(\frac{1}{d}\max\{r, \log d\})$ in the case K = 1, corresponding to the sample complexity of LRMC [3]. Intuitively, this means that information-theoretically we virtually pay no price for mixing low-rank matrices.
– Practical algorithm. Our identifiability results follow from a combinatorial analysis that is infeasible in practice. To address this, we give a practical alternating algorithm for MMC whose performance (on the more difficult problem of MMC) is comparable to state-of-the-art algorithms for the much simpler problems of HRMC and LRMC.
2 Motivating Applications Besides recommender systems, there are many important applications where data can be modeled as a mixture of low-rank matrices. Here are a few examples motivated by current data science challenges. Networks Inference. Estimating the topology of a network (internet, sensor networks, biological networks, social networks) has been the subject of a large body of research in recent years [28–34]. To this end, companies routinely collect distances between nodes (e.g., computers) that connect with monitors (e.g., Google, Amazon, Facebook) in a data matrix X. In a simplified model, if node j is in subnet k, then the j-th column can be modeled as the sum of (i) the distance between node j and router k, and (ii) the distance between router k and each of the monitors. Hence, the columns (nodes) corresponding to each subnet form a low-rank matrix, which is precisely the model assumed by HRMC. However, depending on the network's traffic, each node may use different routes to communicate at different times.
Consequently, the same column in X may contain measurements from different low-rank matrices. In other words, distance matrices of networks are a mixture of low-rank matrices. Computer Vision. Background segmentation is one of the most fundamental and crucial tasks in computer vision, yet it can be tremendously challenging. The vectorized frames of a video can be modeled as columns with some entries (pixels) in a low-rank background and some outlier entries corresponding to the foreground. Typical methods, like the acclaimed Robust PCA (principal component analysis) [35–46], assume that the foreground is sparse and has no particular structure. However, in many situations this is not the case. For instance, since the location of an object in consecutive frames is highly correlated, the foreground can be highly structured. Similarly, the foreground may not be sparse, especially if there are foreground objects moving close to the camera (e.g., in a selfie). Even state-of-the-art methods fail in scenarios like these, which are not covered by current models (see Figure 3 for an example). In contrast, MMC allows us to use one matrix in the mixture to represent the background, other matrices to represent foreground objects (small or large, even dominant), and even other matrices to account for occlusions and other illumination/visual artifacts. Hence, MMC can be a more accurate model for video segmentation and other image processing tasks, including inpainting [47] and face clustering, which we explore in our experiments. Metagenomics. One contemporary challenge in biology is to quantify the presence of different types of bacteria in a system (e.g., the human gut microbiome) [48–52]. The main idea is to collect several DNA samples from such a system and use their genomic information to count the number of bacteria of each type (the genome of each bacterium determines its type). In practice, to obtain an organism's genome (e.g., a person's genome), biologists feed a DNA sample (e.g., blood or hair) to a sequencer machine that produces a series of reads, which are short genomic sequences that can later be assembled and aligned to recover the entire genome. The challenge arises when the sequencer is provided a sample with DNA from multiple organisms, as is the case in the human gut microbiome, where any sample will contain a mixture of DNA from multiple bacteria that cannot be disentangled into individual bacteria. In this case, each read produced by the sequencer may correspond to a different type of bacteria. Consequently, each DNA sample (column) may contain genes (rows) from different types of bacteria, which is precisely the model that MMC describes.
3 Problem Statement Let $X^1, \dots, X^K \in \mathbb{R}^{d \times n}$ be a set of rank-r matrices, and let $\Omega^1, \dots, \Omega^K \in \{0,1\}^{d \times n}$ indicate disjoint sets of observed entries. Suppose $X^1, \dots, X^K$ and $\Omega^1, \dots, \Omega^K$ are unknown, and we only observe $X_\Omega$, defined as follows:
– If the $(i,j)$-th entry of $\Omega^k$ is 1, then the $(i,j)$-th entry of $X_\Omega$ is equal to the $(i,j)$-th entry of $X^k$.
– If the $(i,j)$-th entry of $\Omega^k$ is 0 for every $k = 1, \dots, K$, then the $(i,j)$-th entry of $X_\Omega$ is missing.
This way $\Omega^k$ indicates the entries of $X_\Omega$ that correspond to $X^k$, and $\Omega := \sum_{k=1}^K \Omega^k$ indicates the set of all observed entries. Since $\Omega^1, \dots, \Omega^K$ are disjoint, $\Omega \in \{0,1\}^{d \times n}$. Equivalently, each observed entry of $X_\Omega$ corresponds to an entry in either $X^1$ or $X^2$ or $\dots$ or $X^K$ (i.e., there are no collisions). In words, $X_\Omega$ contains a mixture of entries from several low-rank matrices.
The goal of MMC is to recover all the columns of $X^1, \dots, X^K$ that have observations in $X_\Omega$ (see Figure 1 to build some intuition). In our recommendations example, a column $x_\omega \in X_\Omega$ will contain entries from $X^k$ whenever $x_\omega$ contains ratings from a user of the $k$-th type. Similarly, the same column will contain entries from $X^\ell$ whenever it also contains ratings from a user of the $\ell$-th type. We would like to predict the preferences of both users, or more generally, of all users that have ratings in $x_\omega$. On the other hand, if $x_\omega$ has no entries from $X^k$, then $x_\omega$ involves no users of the $k$-th type, and so it would be impossible (and futile) to try to recover such a column of $X^k$. In MMC, the matrices $\Omega^1, \dots, \Omega^K$ play the role of the hidden variables constantly present in mixture problems. Notice that if we knew $\Omega^1, \dots, \Omega^K$, we could partition $X_\Omega$ accordingly and estimate $X^1, \dots, X^K$ using standard LRMC. The challenge is that we do not know $\Omega^1, \dots, \Omega^K$.
3.1 The Subtleties of MMC The main theoretical difficulty of MMC is that, depending on the pattern of missing data, there could exist false mixtures, i.e., matrices $\tilde X^1, \dots, \tilde X^K$ other than $X^1, \dots, X^K$ that agree with $X_\Omega$, even if $X^1, \dots, X^K$ are observed on uniquely completable patterns for LRMC.
Example 1. Consider the following rank-1 matrices $X^1, X^2$ and their partially observed mixture $X_\Omega$ (where $\cdot$ denotes a missing entry):
\[ X^1 = \begin{bmatrix} 1&2&3&4\\ 1&2&3&4\\ 1&2&3&4\\ 1&2&3&4\\ 1&2&3&4 \end{bmatrix}, \quad X^2 = \begin{bmatrix} 1&2&3&4\\ 2&4&6&8\\ 3&6&9&12\\ 4&8&12&16\\ 5&10&15&20 \end{bmatrix}, \quad X_\Omega = \begin{bmatrix} 1&\cdot&3&4\\ 1&2&\cdot&8\\ 3&2&3&\cdot\\ 4&8&3&4\\ \cdot&10&15&4 \end{bmatrix}. \]
We can verify that $X^1$ and $X^2$ are observed on uniquely completable sampling patterns for LRMC [3]. Nonetheless, we can construct the following false rank-1 matrices that agree with $X_\Omega$:
\[ \tilde X^1 = \begin{bmatrix} 60&40&15&4\\ 1&2/3&1/4&1/15\\ 3&2&3/4&1/5\\ 12&8&3&4/5\\ 60&40&15&4 \end{bmatrix}, \quad \tilde X^2 = \begin{bmatrix} 1&1/4&3&1\\ 8&2&24&8\\ 1&1/4&3&1\\ 4&1&12&4\\ 40&10&120&40 \end{bmatrix}. \]
This shows that even with unlimited computational power, if we exhaustively search all the identifiable patterns for LRMC, we can end up with false mixtures; hence the importance of studying the identifiable patterns for MMC. False mixtures arise because we do not know a priori which entries of $X_\Omega$ correspond to each $X^k$. Hence, it is possible that a rank-r matrix $\tilde X$ agrees with some entries from $X^1$, other entries from $X^2$, and so on. Furthermore, $\tilde X$ may even be the only rank-r matrix that agrees with such a combination of entries, as in Example 1. The snippet below verifies this example numerically.
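The following numpy check (ours, not the authors') confirms that both the true pair and the false pair are rank-1 and agree with every observed entry of $X_\Omega$:

```python
import numpy as np

X1 = np.tile([1, 2, 3, 4], (5, 1)).astype(float)          # true rank-1 X^1
X2 = np.outer([1, 2, 3, 4, 5], [1, 2, 3, 4]).astype(float)  # true rank-1 X^2
X_obs = np.array([[1, np.nan, 3, 4],
                  [1, 2, np.nan, 8],
                  [3, 2, 3, np.nan],
                  [4, 8, 3, 4],
                  [np.nan, 10, 15, 4]])
X1t = np.outer([1, 1/60, 1/20, 1/5, 1], [60, 40, 15, 4])   # false X~1
X2t = np.outer([1, 8, 1, 4, 40], [1, 1/4, 3, 1])           # false X~2

def agrees(pair, X_obs, tol=1e-9):
    """True iff every observed entry of X_obs matches some matrix in the pair."""
    obs = ~np.isnan(X_obs)
    match = np.zeros_like(obs)
    for M in pair:
        match |= obs & (np.abs(M - np.where(obs, X_obs, 0.0)) < tol)
    return bool(match[obs].all())

for pair in [(X1, X2), (X1t, X2t)]:
    ranks = [np.linalg.matrix_rank(M) for M in pair]
    print(ranks, agrees(pair, X_obs))   # both print [1, 1] True
```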
Remark 1. Recall that LRMC and HRMC are tantamount to identifying the subspace(s) containing the columns of X [3, 4]. In fact, if we knew such subspaces, LRMC and HRMC would become almost trivial problems (see Appendix A for details). Similarly, if no data is missing, HRMC simplifies to subspace clustering, which has been studied extensively and is now reasonably well understood [53–62]. In contrast, MMC remains challenging even if the subspaces corresponding to the low-rank matrices in the mixture are known, and even if X is fully observed. We refer the curious reader to Appendix A, and point out the bottom row and the last column in Figure 2, which show the MMC error when the underlying subspaces are known and when X is fully observed.
4 Main Theoretical Results Example 1 shows the importance of studying the identifiable patterns for MMC, which we do now. First recall that r + 1 samples per column are necessary for LRMC [3]. This implies that even if an oracle told us $\Omega^1, \dots, \Omega^K$, if we intend to recover a column of $X^k$, we need to observe it on at least r + 1 entries. Hence we assume without loss of generality that:
(A1) Each column of $\Omega^k$ has either 0 or r + 1 non-zero entries.
In words, A1 requires that each column of $X^k$ to be recovered is observed on exactly r + 1 entries. Of course, observing more entries can only aid completion. Hence, rather than an assumption, A1 describes the most difficult scenario, where we have the bare minimum amount of information required for completion. We use A1 to ease notation, exposition, and analysis. All our results can be easily extended to the case where A1 is dropped (see Remark 2). Without further assumptions on X, completion (of any kind) may be impossible. To see this, consider the simple example where X is supported only on the i-th row. Then it would be impossible to recover X unless all columns were observed on the i-th row. In most completion applications this would be unlikely. For example, in a movie recommender system like Netflix, this would require that all users watched (and rated) the same movie. To rule out scenarios like these, typical completion theory requires incoherence and uniform sampling. Incoherence guarantees that the information is well spread over the matrix. Uniform sampling guarantees that all rows and columns are sufficiently sampled. However, it is usually unclear (and generally unverifiable) whether an incomplete matrix is coherent. Furthermore, observations are hardly ever uniformly distributed. For instance, we do not expect children to watch adult movies. To avoid these issues, instead of incoherence we assume that X is a generic mixture of low-rank matrices. More precisely, we assume that:
(A2) $X^1, \dots, X^K$ are drawn independently according to an absolutely continuous distribution with respect to the Lebesgue measure on the determinantal variety (the set of all $d \times n$ rank-r matrices).
A2 essentially requires that each $X^k$ is a generic rank-r matrix. This type of genericity assumption is becoming increasingly common in studies of LRMC, HRMC, and related problems [3, 4, 23–27, 46]. See Appendix C for a further discussion of A2 and its relation to other common assumptions from the literature. With this, we are ready to present our main theorem. It gives a deterministic condition on $\Omega$ that guarantees that $X^1, \dots, X^K$ can be identified from $X_\Omega$, providing information-theoretic requirements for MMC. The proof is in Appendix B.
Theorem 1. Let A1-A2 hold. Suppose there exist matrices $\{\Omega_\tau\}_{\tau=1}^{r+1}$, formed with disjoint subsets of $d - r + 1$ columns of $\Omega^k$, such that for every $\tau$:
(†) Every matrix $\Omega'$ formed with a proper subset of the columns in $\Omega_\tau$ has at least r fewer columns than non-zero rows.
Then all the columns of $X^k$ that have observations in $X_\Omega$ are identifiable.
In words, Theorem 1 states that MMC is possible as long as we observe the right entries in each $X^k$. The intuition is that each of these entries imposes a constraint on what $X^1, \dots, X^K$ may be, and the pattern in $\Omega$ determines whether these constraints are redundant. Patterns satisfying the conditions of Theorem 1 guarantee that $X^1, \dots, X^K$ is the only mixture that satisfies the constraints produced by the observed entries.
Remark 2. Recall that r + 1 samples per column are strictly necessary for completion. A1 requires that we have exactly that minimum number of samples. If $X^k$ is observed on more than r + 1 entries per column, it suffices that $\Omega^k$ contains a pattern satisfying the conditions of Theorem 1. Theorem 1 shows that MMC is possible if the samplings satisfy certain combinatorial conditions.
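To make condition (†) concrete, here is a brute-force verifier (ours, not the authors'). It enumerates all proper column subsets, so it is exponential in the number of columns and only meant for tiny patterns:

```python
import numpy as np
from itertools import combinations

def satisfies_dagger(Omega_tau, r):
    """Check condition (†): every matrix formed with a proper nonempty
    subset of the columns of Omega_tau must have at least r fewer
    columns than non-zero rows."""
    d, m = Omega_tau.shape           # expect m = d - r + 1 columns
    for size in range(1, m):         # proper subsets only
        for cols in combinations(range(m), size):
            sub = Omega_tau[:, list(cols)]
            nonzero_rows = int((sub.sum(axis=1) > 0).sum())
            if nonzero_rows < size + r:
                return False
    return True

# Tiny example: d = 5, r = 2, so Omega_tau has d - r + 1 = 4 columns,
# each with r + 1 = 3 nonzero entries (assumption A1).
Omega_tau = np.array([[1, 1, 0, 0],
                      [1, 0, 1, 0],
                      [1, 0, 0, 1],
                      [0, 1, 1, 1],
                      [0, 1, 1, 1]])
print(satisfies_dagger(Omega_tau, r=2))   # True
```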
Our next result shows that if each entry of $X^k$ is observed in $X_\Omega$ with probability $O(\frac{1}{d}\max\{r, \log d\})$, then with high probability $\Omega^k$ will satisfy such conditions. The proof is in Appendix B.
Theorem 2. Suppose $r \le \frac{d}{6}$ and $n \ge (r+1)(d-r+1)$. Let $\epsilon > 0$ be given. Suppose that an entry of $X_\Omega$ is equal to the corresponding entry of $X^k$ with probability
\[ p \;\ge\; \frac{2}{d}\max\Big\{2r,\; 12\Big(\log\big(\tfrac{d}{\epsilon}\big)+1\Big)\Big\}. \]
Then $\Omega^k$ satisfies the sampling conditions of Theorem 1 with probability at least $1 - 2(r+1)\epsilon$.
Theorem 2 shows that the sample complexity of MMC is $O(K\max\{r, \log d\})$ observations per column of $X_\Omega$. This is exactly the same as the sample complexity of HRMC [4], and simplifies to $O(\max\{r, \log d\})$ if K = 1, corresponding to the sample complexity of LRMC [3]. Intuitively, this means that information-theoretically we virtually pay no price for mixing low-rank matrices.
5 Alternating Algorithm for MMC Theorems 1 and 2 show that MMC is theoretically possible under reasonable conditions (virtually the same as LRMC and HRMC). However, these results follow from a combinatorial analysis that is infeasible in practice (see Appendix B for details). To address this, we derive a practical alternating algorithm for MMC, which we call AMMC (alternating mixture matrix completion). The main idea is that MMC, like most mixture problems, can be viewed as a clustering task: if we could determine the entries of $X_\Omega$ that correspond to each $X^k$, then we would be able to partition $X_\Omega$ into K incomplete low-rank matrices and then complete them using standard LRMC. The question is how to determine which entries of $X_\Omega$ correspond to each $X^k$, i.e., how to determine $\Omega^1, \dots, \Omega^K$. To address this, let $U^k \in \mathbb{R}^{d \times r}$ be a basis for the subspace containing the columns of $X^k$, and let $x_\omega$ denote the $j$-th column of $X_\Omega$, observed only on the entries indexed by $\omega \subset \{1, \dots, d\}$. For any subspace, matrix, or vector that is compatible with a set of indices, we use that set as a subscript to denote the restriction to the coordinates/rows in the set. For example, $U^k_\omega \in \mathbb{R}^{|\omega| \times r}$ denotes the restriction of $U^k$ to the indices in $\omega$. Suppose $x_\omega$ contains entries from $X^k$, and let $\omega^k \subset \omega$ index such entries. Then our goal is to determine $\omega^k$, as that would tell us the $j$-th column of $\Omega^k$. Since $x_{\omega^k} \in \mathrm{span}\{U^k_{\omega^k}\}$, we can restate our goal as finding the set $\omega^k \subset \omega$ such that $x_{\omega^k} \in \mathrm{span}\{U^k_{\omega^k}\}$. To find $\omega^k$, let $\upsilon \subset \omega$, and let $P^k_\upsilon := U^k_\upsilon (U^{k\mathsf{T}}_\upsilon U^k_\upsilon)^{-1} U^{k\mathsf{T}}_\upsilon$ denote the projection operator onto $\mathrm{span}\{U^k_\upsilon\}$. Recall that $\|P^k_\upsilon x_\upsilon\| \le \|x_\upsilon\|$, with equality if and only if $x_\upsilon \in \mathrm{span}\{U^k_\upsilon\}$. It follows that $\omega^k$ is the largest set $\upsilon$ such that $\|P^k_\upsilon x_\upsilon\| = \|x_\upsilon\|$. In other words, $\omega^k$ is the solution to
\[ \operatorname*{arg\,max}_{\upsilon \subset \omega} \; \|P^k_\upsilon x_\upsilon\| - \|x_\upsilon\| + |\upsilon|. \tag{1} \]
However, (1) is non-convex. Hence, in order to find the solution to (1), we propose the following erasure strategy. The main idea is to start our search with $\upsilon = \omega$, and then iteratively remove the entries (coordinates) of $\upsilon$ that most increase the gap between $\|P^k_\upsilon x_\upsilon\|$ and $\|x_\upsilon\|$ (hence the term erasure). We stop this procedure when $\|P^k_\upsilon x_\upsilon\|$ is equal to $\|x_\upsilon\|$ (or close enough). More precisely, we initialize $\upsilon = \omega$, and then iteratively redefine $\upsilon$ as the set $\upsilon \setminus i$, where
\[ i \;=\; \operatorname*{arg\,max}_{i \in \upsilon} \; \|P^k_{\upsilon \setminus i} x_{\upsilon \setminus i}\| - \|x_{\upsilon \setminus i}\|. \tag{2} \]
In words, $i$ is the coordinate of the vector $x_\upsilon$ such that, if ignored, the gap between the remaining vector $x_{\upsilon \setminus i}$ and its projection $P^k_{\upsilon \setminus i} x_{\upsilon \setminus i}$ is reduced the most. At each iteration we remove (erase) such coordinate $i$ from $\upsilon$.
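As an illustration of the erasure strategy (our sketch, with hypothetical names), the following toy experiment corrupts a few coordinates of a column that otherwise lies in a known subspace. Greedy erasure per eq. (2) typically removes exactly the corrupted coordinates, although this is not guaranteed in general:

```python
import numpy as np

def proj_gap(U, x, idx):
    """Gap ||x_v|| - ||P_v x_v|| for the coordinates in idx (eq. (2) scores)."""
    Uv, xv = U[idx], x[idx]
    coef, *_ = np.linalg.lstsq(Uv, xv, rcond=None)
    return np.linalg.norm(xv) - np.linalg.norm(Uv @ coef)

rng = np.random.default_rng(1)
d, r = 30, 3
U = rng.standard_normal((d, r))          # known basis of span{U^k}
x = U @ rng.standard_normal(r)           # column lying in span{U^k} ...
bad = rng.choice(d, size=5, replace=False)
x[bad] += rng.standard_normal(5)         # ... except on 5 corrupted coordinates

v = list(range(d))
while proj_gap(U, x, v) > 1e-9 and len(v) > r:
    # erase the coordinate whose removal closes the projection gap the most
    v.remove(min(v, key=lambda i: proj_gap(U, x, [j for j in v if j != i])))
# with generic data this typically erases exactly the corrupted entries
print(sorted(set(range(d)) - set(v)) == sorted(bad))
```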
1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths and weaknesses of the theoretical analysis, particularly regarding sample complexity and lower bounds?
3. How does the reviewer assess the novelty and applicability of the proposed problem, Mixture Matrix Completion (MMC)?
4. What are the concerns regarding the clarity and statement of the theorems, especially Theorem 2?
5. What are the issues with the alternating algorithm (AMMC), such as initialization and computational complexity?
6. What are the limitations of the real-data experiments, specifically regarding downsampling and image segmentation?
7. Are there any typos or minor errors in the paper that need correction?
Review
This paper proposes a new variation of the matrix completion problem, called mixture matrix completion (MMC), where each entry of the matrix is drawn from one of a few low-rank matrices, rather than from the same low-rank matrix. The proposed problem seems to be valid, with motivations from a few applications. This paper makes two contributions: 1) an information-theoretical lower bound on the sample complexity; and 2) a heuristic algorithm to solve the MMC problem based on alternating minimization. The paper is written clearly, with sufficient background information, and provides extensive numerical experiments.
The theoretical results of the paper are rather weak. The info-theoretical bound is straightforward and follows directly from previous studies on the matrix completion problem in [3] and [4], combined with a combinatorial enumeration.
- Moreover, the statement of Theorem 1 is a bit hard to follow, and in some parts the meaning is unclear. For example, it is not clear how one can use Theorem 1 to verify whether a given pattern can be used to solve the MMC problem in a computationally efficient way. Does "it is possible to" mean there exists an algorithm to recover ..?
- In Theorem 2, the number of columns is required to be about r times larger than the number of rows, which is a strong assumption. For example, this eliminates the applicability of this result to square matrices. Is this requirement always needed?
For the alternating algorithm (AMMC), the main issues are 1) how to select the initialization in a data-driven manner or adaptively; I haven't found any discussion of it; and 2) the lack of an analysis of the computational complexity of the proposed AMMC algorithm.
A major issue in the real-data experiment is that the AMMC algorithm uses down-sampled data, and because of this many details in the images for the segmentation experiments are lost. For example, the background trees may be quite smoothed after downsampling, and much easier to separate. Therefore, the performance improvement shown in Figure 3 may not come directly from the new algorithm but may be an artifact of downsampling. In Figure 5 of the supplementary material, rows 3 and 7 show more people than appear in the original frame; can you explain why?
In summary, the paper proposes an interesting problem (MMC) to study, but the results are too immature to be published in their current form.
Small typos:
- line 168, the word "entries" appears twice
- line 170, the word "and" appears twice
Update: I have read the authors' rebuttal and below are updates to my review. First, thanks for the authors' clarifications of many aspects of the work, which helped my understanding. My main concerns are:
- the clarity as well as the novelty of the theory: the theorems of this paper build heavily on existing results in [3] and [4], and the additional arguments appear incremental to me. Furthermore, the applicability of Theorem 2 to square matrices is still unclear from the rebuttal; the authors claim it is applicable, but it is not clear how, since there is an assumption in Theorem 2 that explicitly prevents it from being applied. It seems a lot of work is needed to make the statements of the theorems clear (I do appreciate the authors' efforts in the rebuttal to make them clearer than in the submitted version);
- the authors acknowledged the unfairness of the comparison between RPCA with full data and their algorithm with subsampled data, and updated the simulations.
I am not sure why the tree in the background, while staying blurred in the reconstruction of the background, does not show up as a residual in the foreground produced by the authors' algorithm.
NIPS
Title Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks Abstract Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods, which have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We empirically confirm our theory by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost, and make the code to reproduce our results available at https://github.com/ibm/sau-explore. 1 Introduction The exploration-exploitation dilemma is a fundamental problem in models of decision making under uncertainty in various areas of statistics, economics, machine learning, game theory, adaptive control, and management. Given a set of actions associated with unknown probabilistic rewards, an agent has to decide whether to exploit familiar actions to maximize immediate reward or to explore poorly understood or unknown actions to potentially find ways to improve future rewards. Quantifying the uncertainty associated with the value of each action is a key component of conventional algorithms for addressing the exploration-exploitation dilemma. In particular, it is central to the two most successful exploration strategies commonly adopted in bandit settings: Upper Confidence Bound (UCB) and Thompson Sampling. The UCB algorithm [1–11] follows the principle of optimism in the face of uncertainty, which promotes exploration by maintaining confidence sets for action-value estimates and then choosing actions optimistically within these confidence sets. Thompson Sampling (TS), introduced by [12] and successfully applied in a wide range of settings [13–16], is based on the principle of sampling in the face of uncertainty, meaning that it samples actions from the posterior distribution over action-values given past rewards.
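To make "sampling in the face of uncertainty" concrete, here is a standard Beta-Bernoulli Thompson Sampling sketch (a textbook construction, not code from the paper): each arm keeps a Beta posterior over its success probability, a value is sampled from each posterior, and the argmax is played.

```python
import numpy as np

def thompson_bernoulli(mu_true, T=5000, seed=0):
    """Classic Beta-Bernoulli Thompson Sampling on a K-armed bandit."""
    rng = np.random.default_rng(seed)
    K = len(mu_true)
    alpha, beta = np.ones(K), np.ones(K)          # uniform Beta(1, 1) priors
    for _ in range(T):
        a = int(np.argmax(rng.beta(alpha, beta)))  # sample, then act greedily
        r = rng.random() < mu_true[a]              # Bernoulli reward
        alpha[a] += r                              # posterior update
        beta[a] += 1 - r
    return alpha / (alpha + beta)                  # posterior mean per arm

print(thompson_bernoulli([0.2, 0.5, 0.8]))
```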
In modern reinforcement learning (RL), the flexible generalization capabilities of neural networks brought about by Deep RL have proven successful in tackling complex environments by learning mappings from high-dimensional observations directly to value estimates [17]. However, obtaining uncertainty measures over complex value functions like neural network models becomes challenging because of the intractability of estimating and updating posteriors over their parameters, limiting the applicability of Bayesian exploration strategies like UCB and TS. Recently, several proposals to address this challenge have been put forth that rely on approximations of the posterior over value functions. Unfortunately, these methods tend to underperform empirically compared to much simpler heuristics. For instance, [18] showed that in contextual bandit tasks the main approximate Bayesian posterior methods for deep neural networks are consistently beaten by simple baselines such as combining neural network value functions with a basic exploration strategy like epsilon-greedy, or using simple action-value models like linear regression, where the exact posterior can be computed. In this paper we propose a novel uncertainty measure that departs from the Bayesian approach of estimating the uncertainty over the parameters of the value prediction model. Our uncertainty measure, which we call Sample Average Uncertainty (SAU), is a frequentist quantity that only depends on the value prediction of each action. In particular, unlike UCB and TS, exploration based on SAU does not require the costly computation of a posterior distribution over models in order to estimate the uncertainty of their predictions. In fact, instead of first estimating the uncertainty over the parameters of the value function and then using it to quantify the uncertainty over outcomes, SAU directly estimates uncertainty over outcomes by measuring the variance of sample averages. This result is then plugged into the current estimate of the outcome model. With our new measure of the uncertainty of the expected action-values, we build two SAU-based exploration strategies: one based on the principle of "optimism in the face of SAU", which we name SAU-UCB, and a second one based on "sampling in the face of SAU", which we name SAU-Sampling. We investigate the use of these new exploration strategies to tackle contextual bandit problems, and show that SAU is closely related to the mean-squared error in contextual bandits. This allows us to show analytically that in the case of Bernoulli multi-armed bandits the SAU measure converges to the uncertainty of the action-value estimates obtained by TS, despite SAU being much simpler to compute and not needing to maintain a posterior distribution. In addition, we derive an upper bound on the expected regret incurred by our SAU algorithms in multi-armed bandits, showing that they achieve the optimal logarithmic regret. Finally, we empirically study the deployment of SAU-UCB and SAU-Sampling in the deep bandit setting, using them as exploration strategies for deep neural network value function models. Concretely, we follow the study of [18] and show that SAU consistently outperforms the deep Bayesian bandit algorithms that they analyzed on the benchmarks that they proposed. 2 Problem Formulation: Contextual Bandits The contextual bandit problem is a paradigmatic model for the study of the exploration-exploitation trade-off and is formulated as follows.
At each time step $n$ we observe a context $x_n$, select an action $a_n$ from a set $\mathcal{K} = \{1, \dots, K\}$, after which we receive a reward $r_n$. The value of an action $a$ (in context $x_n \in \mathbb{R}^p$) is defined as the expected reward given that $a$ is selected:
\[ \mathbb{E}[r_n \mid a_n = a] = \mu(x_n, \theta_a), \tag{1} \]
where in general the action-values $\mu(\cdot)$ depend on unknown parameters $\theta_a \in \mathbb{R}^p$. Our goal is to design a sequential decision-making policy $\pi$ that over time learns the action parameters $\theta_a$ which maximize the expected reward. This goal is readily quantified in terms of minimizing expected regret, where we say that at step $n$ we incur expected regret
\[ \max_{a' \in \mathcal{K}} \{\mu(x_n, \theta_{a'})\} - \mu(x_n, \theta_{a_n}), \tag{2} \]
i.e., the difference between the reward received by playing the optimal action and the one following the chosen action $a_n$. One way to design a sequential decision-making policy $\pi$ that minimizes expected regret is to quantify the uncertainty around the current estimate of the unknown parameters $\theta_a$. TS, for instance, does this by sequentially updating the posterior of $\theta_a$ after each action and reward. This paper presents a novel and simpler alternative method to estimate uncertainty.
3 Exploration based on Sample Average Uncertainty
3.1 Sample Average Uncertainty (SAU)
In this section we begin by introducing our novel measure of uncertainty, SAU. Let $T_a$ denote the set of time steps when action $a$ was chosen so far, and let $n_a$ be the size of this set. Based on the $n_a$ rewards $\{r_n\}_{n \in T_a}$ obtained with action $a$, the sample mean reward given action $a$ is $\bar r_a = n_a^{-1} \sum_{n \in T_a} r_n$. At this point we reiterate that exploitation and exploration are customarily traded off against each other with a Bayesian approach that estimates the uncertainty of the action-values on the basis of a posterior distribution over their parameters given past rewards. Instead, we propose a frequentist approach that directly measures the uncertainty of the sample average rewards just computed. Direct calculation using eq. (1) gives us that the variance of the sample mean reward is $\mathrm{Var}(\bar r_a) = \bar\sigma_a^2 / n_a$, where $\bar\sigma_a^2 = n_a^{-1} \sum_{n \in T_a} \sigma_{n,a}^2$ with $\sigma_{n,a}^2 = \mathbb{E}\big[(r_n - \mu(x_n, \theta_a))^2\big]$. Assuming that there is a sequence of estimators $\{\hat\theta_{n,a}\}_{n \in T_a}$ of $\theta_a$, we can replace $\theta_a$ with $\hat\theta_{n,a}$ at each $n \in T_a$ to approximate $\bar\sigma_a^2$ with a convenient statistic $\tau_a^2$ defined as
\[ \tau_a^2 = n_a^{-1} \sum_{n \in T_a} \big(r_n - \mu(x_n, \hat\theta_{n,a})\big)^2. \tag{3} \]
With this we get an approximate sample mean variance of
\[ \widehat{\mathrm{Var}}(\bar r_a) = \tau_a^2 / n_a. \tag{4} \]
The central proposal of this paper is to use $\widehat{\mathrm{Var}}(\bar r_a)$ as a measure of the uncertainty of the decision sequence. We call this quantity Sample Average Uncertainty (SAU), since it directly measures the uncertainty of the sample mean rewards $\bar r_a$. In practice, $\tau_a^2$ can be updated incrementally as follows:
1. Compute the prediction residual:
\[ e_n = r_n - \mu(x_n, \hat\theta_{n,a_n}); \tag{5} \]
2. Update Sample Average Uncertainty (SAU):
\[ \tau_{a_n}^2 \leftarrow \tau_{a_n}^2 + n_{a_n}^{-1}\big[e_n^2 - \tau_{a_n}^2\big]. \tag{6} \]
Let us take a moment to contrast the uncertainty measure given by SAU with existing exploration algorithms like TS, which, as we said, would estimate the uncertainty of the action-value function $\mu(\cdot)$ by maintaining and updating a distribution over its parameters $\theta_a$. SAU instead directly quantifies the uncertainty associated with each action by measuring the uncertainty of the sample average rewards. The clear advantage of SAU is that it is simple and efficient to compute: all it requires are the prediction residuals $r_n - \mu(x_n, \hat\theta_{n,a_n})$, without any need to model or access the uncertainty of $\mu(x_n, \hat\theta_{n,a})$.
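The incremental update of eqs. (5)-(6) is a few lines of code; the following sketch (class name ours) keeps one running SAU statistic per action:

```python
import numpy as np

class SAUTracker:
    """Incremental SAU update (eqs. (5)-(6)): per-action running average
    of squared prediction residuals."""
    def __init__(self, n_actions):
        self.tau2 = np.zeros(n_actions)   # SAU statistic tau_a^2 per action
        self.n = np.zeros(n_actions)      # pull counts n_a

    def update(self, a, reward, prediction):
        e = reward - prediction           # eq. (5): prediction residual
        self.n[a] += 1
        self.tau2[a] += (e**2 - self.tau2[a]) / self.n[a]   # eq. (6)

    def sample_mean_variance(self, a):
        return self.tau2[a] / max(self.n[a], 1)             # eq. (4)
```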
Because of the simplicity of its implementation, SAU can be naturally adapted to arbitrary action-value functions. In particular, it can be used to implement an exploration strategy for action-value functions parameterized as deep neural networks or other model classes for which TS would be infeasible because of the intractability of computing a probability distribution over models. Note that in updating $\tau_a^2$ we use the residuals obtained at each step rather than re-evaluating them using later estimates. This is a design choice motivated by the goal of minimizing the computational cost and maximizing the implementation efficiency of SAU. Moreover, this choice can be justified from the viewpoint of statistical efficiency since, as the number of training samples increases, the impact of the initial residuals decreases, so that re-evaluating them incurs diminishing returns. Proposition 3 formalizes this argument by showing that $\tau_a^2$ as computed in eq. (6) is indeed concentrated around its expectation. In addition, and perhaps as importantly, the aim of SAU is to provide a quantity to support exploration. The effect of potentially inaccurate residuals in the initial steps may actually be beneficial due to the additional noise driving initial exploration. This might in part be at the root of the good empirical results.
3.2 SAU-based Exploration in Bandit Problems
We now use the SAU measure to implement exploration strategies for (contextual) bandit problems.
SAU-UCB. UCB is a common way to perform exploration. Central to UCB is the specification of an "exploration bonus", which is typically chosen to be proportional to the measure of uncertainty. Accordingly, we propose to use the SAU measure $\tau_a^2$ as exploration bonus. Specifically, given value predictions $\hat\mu_{n,a} = \mu(x_n, \hat\theta_{n,a})$ for each $a$ at step $n$, we modify the values as
\[ \tilde\mu_{n,a} = \hat\mu_{n,a} + \sqrt{n_a^{-1}\tau_a^2 \log n}, \tag{7} \]
then choose the action by $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a \in \mathcal{K}})$. We call this implementation of UCB using SAU as exploration bonus SAU-UCB.
SAU-Sampling. "Sampling in the face of uncertainty" is an alternative exploration principle that we propose to implement with SAU in addition to UCB. This is inspired by TS, which samples the success probability estimate $\hat\mu_a$ from its posterior distribution. Analogously, we propose to sample values from a parametric Gaussian distribution with a mean given by the value prediction and a variance given by the SAU estimate $\tau_a^2/n_a$. This results in sampling values $\tilde\mu_{n,a}$ at each time $n$ as
\[ \tilde\mu_{n,a} \sim \mathcal{N}\big(\hat\mu_{n,a},\, \tau_a^2/n_a\big), \tag{8} \]
then choosing the action by $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a \in \mathcal{K}})$. We call this use of SAU inspired by TS SAU-Sampling. SAU-UCB and SAU-Sampling are summarized in Algorithm 1.
Algorithm 1 SAU-UCB and SAU-Sampling for bandit problems
1: Initialize: $\hat\theta_a$, $S_a^2 = 1$ and $n_a = 0$ for $a \in \mathcal{K}$.
2: for $n = 1, 2, \dots$ do
3: Observe context $x_n$;
4: for $a = 1, \dots, K$ do
5: Calculate the prediction $\hat\mu_{n,a} = \mu(x_n; \hat\theta_a)$ and $\tau_a^2 = S_a^2/n_a$;
6: Draw a sample $\tilde\mu_{n,a} = \hat\mu_{n,a} + \sqrt{\tau_a^2 n_a^{-1} \log n}$ (SAU-UCB) or $\tilde\mu_{n,a} \sim \mathcal{N}(\hat\mu_{n,a}, n_a^{-1}\tau_a^2)$ (SAU-Sampling);
7: end for
8: Compute $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a \in \mathcal{K}})$ if $n > K$, otherwise $a_n = n$;
9: Select action $a_n$, observe reward $r_n$;
10: Update $\hat\theta_{a_n}$ and increment $n_{a_n} \leftarrow n_{a_n} + 1$;
11: Update $S_{a_n}^2 \leftarrow S_{a_n}^2 + e_n^2$ using the prediction error $e_n = r_n - \hat\mu_{n,a_n}$;
12: end for
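The following minimal sketch instantiates Algorithm 1 for a context-free Bernoulli bandit, where $\mu(x; \hat\theta_a)$ reduces to a running mean per arm; it is our simplification, not the authors' code.

```python
import numpy as np

def sau_bandit(mu_true, T=10_000, mode="sampling", seed=0):
    """SAU-UCB / SAU-Sampling (Algorithm 1) on a Bernoulli bandit."""
    rng = np.random.default_rng(seed)
    K = len(mu_true)
    mu_hat = np.zeros(K)        # value predictions (sample means here)
    S2 = np.ones(K)             # cumulative squared residuals S_a^2
    n = np.zeros(K)             # pull counts n_a
    rewards = []
    for t in range(1, T + 1):
        if t <= K:
            a = t - 1                              # play each arm once
        else:
            tau2 = S2 / n
            if mode == "ucb":                      # eq. (7)
                scores = mu_hat + np.sqrt(tau2 / n * np.log(t))
            else:                                  # eq. (8)
                scores = rng.normal(mu_hat, np.sqrt(tau2 / n))
            a = int(np.argmax(scores))
        r = float(rng.random() < mu_true[a])       # Bernoulli reward
        e = r - mu_hat[a]                          # prediction residual
        n[a] += 1
        mu_hat[a] += e / n[a]                      # update value estimate
        S2[a] += e**2                              # line 11 of Algorithm 1
        rewards.append(r)
    return np.mean(rewards)

print(sau_bandit([0.3, 0.5, 0.7]))   # average reward approaches 0.7
```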
3.3 Novelty and comparison with related approaches

Using variance estimation in MAB is not novel. For example, [19] makes use of Bernstein’s inequality to refine confidence intervals by additionally considering the uncertainty from estimating the variance of the reward noise. Our approach is fundamentally different in two respects. First, Algorithm 1 proposes a novel measure to approximate the uncertainty of the estimate of the mean reward that affords a flexible implementation, and can therefore be directly extended and scaled up to complicated value models like deep neural networks. Second, our SAU quantity $\tau^2$ is the per-step squared prediction error, i.e., the average cumulative squared prediction error, as opposed to an estimate of the variance of the different arms. In fact, $\tau^2$ does not rely on the traditional variance estimation analyzed by [19], but is instead computed directly from the prediction. This difference makes SAU even easier to implement and adapt to settings like deep networks.

The exploration bonus in Algorithm 1 is not a function of the observed context, though it is updated from historical observations of the context. The algorithm could indeed be extended to provide a quantification of reward uncertainty that is a function of the current context, for instance by fitting the SAU quantity as a function of context. Clearly, this would come at the cost of substantially increasing the complexity of the algorithm. To avoid this additional complexity, we instead focus the paper on the development of the SAU quantity as a simple estimate of uncertainty to efficiently drive exploration. Exploring this possibility is nevertheless a potentially exciting direction for future work.

4 SAU in Multi-Armed Bandits

4.1 SAU Approximates Mean-squared Error and TS in Multi-armed Bandits

Before considering the contextual bandit scenario, we analyze the measure of uncertainty provided by SAU in multi-armed bandits and compare it to the uncertainty computed by TS. This will help motivate SAU and elucidate its functioning.

We assume a multi-armed Bernoulli bandit, i.e. at each step $n$ each action $a \in \mathcal{K}$ results in a reward sampled from $r_n \sim \mathrm{Bernoulli}(\mu_a)$ with fixed (unknown) means $\mu_a \in [0, 1]$. Assume that action $a$ has been taken $n_a$ times so far, and let $\hat{\mu}_a$ denote the sample average of the rewards for each action. The prediction residual of eq. (5) is $e_n = r_n - \hat{\mu}_{a_n}$ and is the central quantity needed to compute SAU.

TS in the case of Bernoulli bandits is typically applied by assuming that the prior follows a Beta distribution, i.e. the values are sampled from $\mathrm{Beta}(\alpha_a, \beta_a)$ with parameters $\alpha_a$ and $\beta_a$ for $a \in \mathcal{K}$. Uncertainty around the estimated mean values is then quantified by the posterior variance, denoted by $\hat{V}_a$ (see Appendix A.1). We then have the following proposition relating SAU and TS in Bernoulli bandits:

Proposition 1 For Beta Bernoulli bandits the expectation of the average prediction residual $e_n^2 / n_{a_n}$ is an approximate unbiased estimator of the expectation of the posterior variance $\hat{V}_a$ in TS. Concretely: $\mathbb{E}[\hat{V}_{a_n}] = \mathbb{E}[e_n^2 / n_{a_n}] + O(n_{a_n}^{-2})$.

Proof Proof of Proposition 1 is provided in Appendix A.1.

Proposition 1 says that SAU asymptotically approximates TS for Bernoulli bandits, despite not needing to assume a prior and update a posterior distribution over parameters. In Appendix A.3 we support this empirically by showing that in multi-armed bandits SAU rivals TS.
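Proposition 1 is also easy to check numerically. The sketch below simulates a single Bernoulli arm and compares the SAU estimate $\tau^2/n$ against the Beta$(1+s, 1+f)$ posterior variance that TS would maintain; the specific seed, mean, and horizon are illustrative choices, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n_steps = 0.3, 5000
s = f = 0          # success / failure counts for the Beta(1 + s, 1 + f) posterior
n, tau2 = 0, 0.0   # SAU statistics for this arm

for _ in range(n_steps):
    r = rng.binomial(1, mu)
    mu_hat = s / (s + f) if (s + f) > 0 else 0.0  # sample mean before this reward
    e = r - mu_hat                                # prediction residual, eq. (5)
    n += 1
    tau2 += (e ** 2 - tau2) / n                   # incremental SAU update, eq. (6)
    s, f = s + r, f + (1 - r)

a, b = 1 + s, 1 + f
beta_var = a * b / ((a + b) ** 2 * (a + b + 1))   # Beta posterior variance used by TS
print(f"SAU tau^2/n = {tau2 / n:.2e}, Beta posterior variance = {beta_var:.2e}")
```

Both quantities should come out close to $\mu(1-\mu)/n$, consistent with Proposition 1.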
The following proposition further characterizes the prediction residual:

Proposition 2 For Bernoulli bandits the expectation of the prediction residual used in SAU satisfies $\mathbb{E}[e_n^2 / n_{a_n}] = \mathbb{E}[(r_n - \hat{\mu}_{a_n})^2 / n_{a_n}] = \mathbb{E}\left[(\hat{\mu}_{a_n} - \mu_{a_n})^2\right] + O(n_{a_n}^{-2})$.

Proof Proof of Proposition 2 is provided in Appendix A.2.

Proposition 2 says that the prediction residual $e_n = r_n - \hat{\mu}_{a_n}$ yields an approximately unbiased estimator of the mean squared error $\mathbb{E}\left[(\hat{\mu}_{a_n} - \mu_{a_n})^2\right]$. This means that for Bernoulli bandits, SAU closely approximates the uncertainty of the action-value estimates.

Armed with this characterization of the prediction residual $r_n - \hat{\mu}_{a_n}$ in Proposition 2, we now quantify the performance of the estimator $\tau_a^2$ in eq. (3) in terms of its concentration around its expectation:

Proposition 3 For $\delta \in \left[\, 2\exp\left(-\sigma_a^2 n_a / (32c)\right),\ 1 \right)$, where $\sigma_a^2$ is the variance of $r_j$ for $j \in T_a$ and $c$ a constant, we have
$$\Pr\left\{ \left|\tau_a^2 - \mathbb{E}\left[\tau_a^2\right]\right| \geq \sigma_a \sqrt{8c/(n_a \log(\delta/2))} \right\} \leq \delta.$$

Proof Proof of Proposition 3 is provided in Appendix A.4.

Proposition 3 says that $\tau_a^2$ is concentrated around its expectation, and thus remains stable as it is being updated. In Appendix A.6 we also show that $\mathbb{E}\left[\tau_a^2\right] \to \sigma_a^2$ as $n_a \to \infty$, and in Appendix A.7 we derive an upper bound on the expected regrets of SAU-UCB and SAU-Sampling in multi-armed bandits, proving that the optimal logarithmic regrets are achievable uniformly over time; in other words, the theoretical performance of SAU rivals TS in multi-armed bandits.

4.2 SAU in Linear Contextual Bandits: Theoretical analysis

We now show that the results in Proposition 2 also hold for another important bandit model besides Bernoulli bandits, namely linear contextual bandits, defined by the following outcome model:
$$r_n = x_n^\top \theta_a + \epsilon_{n,a}, \quad n = 1, 2, \dots, \tag{9}$$
where $x_n, \theta_a \in \mathbb{R}^p$ and the $\epsilon_{n,a}$ are i.i.d. random variables with variance $\sigma_a^2$. Assume action $a$ was selected $n_a$ times. We obtain the least-squares estimator $\hat{\theta}_{n,a_n} = \left(\sum_{j \in T_{n,a_n}} x_j x_j^\top\right)^{-1} \left(\sum_{j \in T_{n,a_n}} x_j r_j\right)$. Accordingly, the prediction and the prediction residual at step $n$ are, respectively,
$$\hat{\mu}_{n,a_n} = x_n^\top \hat{\theta}_{n,a_n} \quad \text{and} \quad e_n^2 = (r_n - x_n^\top \hat{\theta}_{n,a_n})^2. \tag{10}$$
Denote $h_n = x_n^\top \left(\sum_{j \in T_{n,a_n}} x_j x_j^\top\right)^{-1} x_n$. The mean squared error of $x_n^\top \hat{\theta}_{n,a_n}$ is $\mathrm{MSE}_n = \mathbb{E}[(x_n^\top \hat{\theta}_{n,a_n} - x_n^\top \theta_{a_n})^2]$. Direct calculation shows that $\mathrm{MSE}_n = h_n \sigma_{a_n}^2$ and that $\mathbb{E}\left[e_n^2 / n_{a_n}\right] = (1 - h_n)\, \sigma_{a_n}^2 / n_{a_n}$. Therefore, we have the following proposition:

Proposition 4 For linear contextual bandits (9) we have that $\mathbb{E}[e_n^2 / n_{a_n}] = (h_n n_{a_n})^{-1} (1 - h_n)\, \mathrm{MSE}_n$. Furthermore, assuming that there exist constants $c_1$ and $c_2$ so that $c_1 / n_{a_n} \leq h_n \leq c_2 / n_{a_n}$, then
$$c_2^{-1} (1 - c_2/n_{a_n})\, \mathrm{MSE}_n \leq \mathbb{E}\left[e_n^2 / n_{a_n}\right] \leq c_1^{-1} (1 - c_1/n_{a_n})\, \mathrm{MSE}_n.$$

Proposition 4 provides a lower and an upper bound for $\mathbb{E}\left[e_n^2 / n_{a_n}\right]$ in terms of $\mathrm{MSE}_n$, meaning that on average SAU is a conservative measure of the uncertainty around $x_n^\top \hat{\theta}_{n,a_n}$. Noting that $0 \leq h_j \leq 1$ and $\sum_{j \in T_{n,a_n}} h_j = p$, the assumption that $c_1/n_{a_n} \leq h_n \leq c_2/n_{a_n}$ requires that $h_n$ neither dominates nor is dominated by the other terms $h_j$, $j \in T_{n,a_n}$, meaning that contexts should be “homogeneous” to a certain extent. To examine the robustness to violations of this assumption, in the simulation in Appendix B we empirically test the performance under a heavy-tailed t-distribution with df = 2. The results show that SAU works robustly even under this type of context inhomogeneity.
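The quantities appearing in Proposition 4 are straightforward to compute explicitly. The following NumPy sketch generates synthetic data from model (9) for one arm and evaluates the least-squares prediction and the leverage $h_n$; the data-generating choices (dimensions, seed, noise level) are our own illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_a = 5, 200
theta = rng.normal(size=p)
theta /= np.linalg.norm(theta)              # normalize so that ||theta|| = 1
X = rng.normal(size=(n_a, p))               # contexts observed for this arm
r = X @ theta + 0.5 * rng.normal(size=n_a)  # rewards from model (9), sigma = 0.5

G = X.T @ X                                 # Gram matrix sum_j x_j x_j^T
theta_hat = np.linalg.solve(G, X.T @ r)     # least-squares estimator

x_new = rng.normal(size=p)
mu_hat = x_new @ theta_hat                  # prediction, eq. (10)
h_new = x_new @ np.linalg.solve(G, x_new)   # leverage h_n appearing in Prop. 4
print(f"prediction = {mu_hat:.3f}, leverage = {h_new:.4f} (p/n_a = {p / n_a:.4f})")
```

For Gaussian contexts the leverage concentrates around $p/n_a$, which is exactly the “homogeneous contexts” regime assumed in Proposition 4.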
4.3 SAU in Linear Contextual Bandits: Empirical evaluation on synthetic data

In this section we present simulation results quantifying the performance of our SAU-based exploration algorithms in linear contextual bandits. We evaluate SAU on synthetically generated datasets to address two questions: (1) How does SAU’s performance compare against Thompson Sampling? (2) How robust is SAU in various parameter regimes?

We consider three scenarios for $K$ (the number of actions) and $p$ (the context dimensionality): (a) $K = 5$, $p = 5$; (b) $K = 20$, $p = 5$; and (c) $K = 5$, $p = 40$. The horizon is $N = 20000$ steps. For each action $a$, the parameters $\theta_a$ are drawn from a uniform distribution on $[-1, 1]$, then normalized so that $\|\theta_a\| = 1$. At each step $n$ the context $x_n$ is sampled from a Gaussian distribution $\mathcal{N}(0_p, I_p)$. Finally, we set the noise variance to $\sigma^2 = 0.5^2$ so that the signal-to-noise ratio equals 4.

We compare our SAU-based exploration algorithms, SAU-UCB and SAU-Sampling, to Thompson Sampling (“TS” in Fig. 1). For TS on the linear model, we follow [18] and use Bayesian linear regression for exact posterior inference. We also consider the PrecisionDiag approximation for the posterior covariance matrix of $\theta_a$ with the same priors as in [18] (“TSdiag” in Fig. 1).

[Figure 1: cumulative regret as a function of step; panels a) $(K, p) = (5, 5)$, b) $(K, p) = (20, 5)$, c) $(K, p) = (5, 40)$.]

Fig. 1a) shows regret as a function of step for $(K, p) = (5, 5)$. From the figure we make two observations: SAU-Sampling is comparable to TS, and SAU-UCB achieves better regret than TS. In terms of cumulative regret, SAU significantly outperforms TS and TSdiag. Figures 1b) and c) show the effects of larger $K$ and $p$, respectively. The observations from Fig. 1a) still hold in these cases, implying that SAU’s performance is robust to an increase in action space and context dimension.

We also consider four other cases: (1) the elements of $\theta_a$ are sampled from $\mathcal{N}(0, 1)$ and then normalized; (2) the model errors are correlated with an AR(1) covariance structure with correlation $\rho = 0.5$; (3) the elements of $x_i$ are correlated with an AR(1) covariance structure with correlation $\rho = 0.5$; and (4) the elements of $x_i$ are sampled from a heavy-tailed t-distribution with df = 2 and truncated at 5. These results are shown in Appendix B and are consistent with the results in Fig. 1, confirming SAU’s robustness across contextual linear bandit problems.

5 Deep Contextual Bandits

5.1 Deep Bayesian Bandit Algorithms

Deep contextual bandits refers to tackling contextual bandits by parameterizing the action-value function as a deep neural network $\mu(x, \theta)$, thereby leveraging models that have been very successful in large-scale supervised learning [20] and RL [17]. Notice that in the deep setting we denote all parameters with $\theta = \{\theta_a\}_{a \in \mathcal{K}}$, as is common in the neural network literature. In particular, $\theta$ includes the parameters that are shared across actions, as well as those of the last layer of the network which are specific to each action $a$.

Algorithm 2 breaks down a generic deep contextual bandit algorithm in terms of an API exposing its basic subroutines: PREDICT (which outputs the set of action-values $\{\mu_{n,a}\}_{a \in \mathcal{K}}$ given the observation $x_n$), ACTION (which selects an action given all the action-values), and UPDATE (which updates model parameters at the end of the step). In this scheme Thompson Sampling (TS) is implemented as in Algorithm 3, which underlines where TS promotes exploration by sampling from a distribution over model parameters $P_n(\theta)$. In principle this provides an elegant Bayesian approach to tackle the exploration-exploitation dilemma embodied by contextual bandits.

Algorithm 2 Generic Deep Contextual Bandit algorithm
1: for $n = 1, 2, \dots$ do
2:   Observe context $x_n$;
3:   Compute values $\{\mu_{n,a}\}_{a \in \mathcal{K}}$ = PREDICT($x_n$);
4:   Choose $a_n$ = ACTION($\{\mu_{n,a}\}_{a \in \mathcal{K}}$), observe reward $r_n$;
5:   UPDATE($r_n, a_n, x_n$);
6: end for
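The API of Algorithm 2 can be expressed as a small interface; the sketch below mirrors the paper’s subroutine names, while the class itself and the assumed `env` object (with `context()` and `reward()` methods) are our own illustration.

```python
class ContextualBanditAgent:
    """Skeleton of Algorithm 2: a policy exposes PREDICT, ACTION and UPDATE."""

    def predict(self, x):
        """Return the vector of action-values {mu_{n,a}} for context x."""
        raise NotImplementedError

    def action(self, values):
        """Select an action given the values; exploration lives here or in predict."""
        raise NotImplementedError

    def update(self, r, a, x):
        """Update model parameters from the observed triplet (r_n, a_n, x_n)."""
        raise NotImplementedError


def run(agent, env, n_steps):
    """The generic interaction loop of Algorithm 2."""
    for _ in range(n_steps):
        x = env.context()          # observe context x_n
        values = agent.predict(x)  # PREDICT
        a = agent.action(values)   # ACTION
        r = env.reward(a)          # play a_n, observe reward r_n
        agent.update(r, a, x)      # UPDATE
```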
Unfortunately, representing and updating a posterior distribution over model parameters $P_n(\theta)$ exactly becomes intractable for complex models such as deep neural networks.

Algorithm 3 Thompson Sampling for Deep Contextual Bandits
1: function PREDICT($x_n$)
2:   Exploration: sample model parameters from the posterior distribution: $\hat{\theta}_n \sim P_n(\theta)$;
3:   Return predicted values $\{\hat{\mu}_{n,a}\}_{a \in \mathcal{K}} = \mu(x_n, \hat{\theta}_n)$;
4: function ACTION($\{\hat{\mu}_{n,a}\}_{a \in \mathcal{K}}$)
5:   Return $a_n = \arg\max_a(\{\hat{\mu}_{n,a}\}_{a \in \mathcal{K}})$;
6: function UPDATE($r_n, a_n, x_n$)
7:   Use the triplet $(r_n, a_n, x_n)$ to update the posterior distribution: $P_{n+1}(\theta) \leftarrow P_n(\theta)$;

To obviate this problem, several techniques that heuristically approximate posterior sampling have emerged, such as randomly perturbing network parameters [21–23] or bootstrapped sampling [24]. Within the scheme of Algorithm 2, the role of random perturbation and bootstrapped sampling is to heuristically emulate the model sampling procedure promoting exploration in the PREDICT subroutine (see TS in Algorithm 3). However, systematic empirical comparisons recently demonstrated that simple strategies such as epsilon-greedy [17, 25] and Bayesian linear regression [26] remain very competitive compared to these approximate posterior sampling methods in deep contextual bandits. In particular, [18] showed that linear models where the posterior can be computed exactly, and epsilon-greedy action selection, overwhelmingly outrank deep methods with approximate posterior sampling in a suite of contextual bandit benchmarks based on real-world data.

5.2 SAU for Deep Contextual Bandits

We now re-examine the deep contextual bandit benchmarks in [18] and show that SAU can be seamlessly combined with deep neural networks, resulting in an exploration strategy whose performance is competitive with the best deep contextual bandit algorithms identified by [18].

Algorithm 4 shows the deep contextual bandit implementation of SAU. Notice that the PREDICT subroutine is remarkably simple, consisting merely of the forward pass of the deep neural network value prediction model. In contrast to this extremely simple procedure, TS-based methods require at this step to (approximately) sample from the model posterior to implement exploration. In SAU, exploration is instead taken care of by the ACTION subroutine, which takes the values as inputs and either explores through sampling from a distribution around the predicted values (SAU-Sampling) or through an exploration bonus added to them (SAU-UCB). SAU then selects the action corresponding to the maximum of these perturbed values. The UPDATE for SAU is also quite simple, and consists of updating the neural network parameters to minimize the reward prediction error loss $\ell_n$ following action selection, using SGD via backprop, or possibly its mini-batch version (which would then be carried out on a batch of $(r_n, a_n, x_n)$ triplets previously stored in a memory buffer). UPDATE then updates the count and the SAU measure $\tau_{a_n}$ for the selected action $a_n$.

We notice that the simplicity of SAU for deep contextual bandits is akin to the simplicity of epsilon-greedy, for which exploration is also implemented in the ACTION subroutine (see Algorithm 5 in Appendix E). In fact, comparing the two algorithms it is clear that SAU can be used as a drop-in replacement for epsilon-greedy exploration, making it widely applicable.

Algorithm 4 SAU for Deep Contextual Bandits (SAU-Neural-Sampling and -UCB)
1: function PREDICT($x_n$)
2:   Return predicted values $\{\hat{\mu}_{n,a}\}_{a \in \mathcal{K}} = \mu(x_n, \hat{\theta}_n)$;
3: function ACTION($\{\hat{\mu}_{n,a}\}_{a \in \mathcal{K}}$)
4:   Exploration: compute $\tilde{\mu}_{n,a} \sim \mathcal{N}\left( \hat{\mu}_{n,a},\ \tau_a^2 / n_a \right)$ (SAU-Sampling)
5:   or $\tilde{\mu}_{n,a} = \hat{\mu}_{n,a} + \sqrt{\tau_a^2 \log n / n_a}$ (SAU-UCB);
6:   Return $a_n = \arg\max_a(\{\tilde{\mu}_{n,a}\}_{a \in \mathcal{K}})$;
7: function UPDATE($r_n, a_n, x_n$)
8:   Compute the prediction error $e_n = r_n - \hat{\mu}_{n,a_n}$ and the loss $\ell_n = \frac{1}{2}(r_n - \hat{\mu}_{n,a_n})^2$;
9:   Update model parameters to $\hat{\theta}_{n+1}$ using SGD with gradients $\partial \ell_n / \partial \theta$ (or the mini-batch version);
10:  Update exploration parameters: $n_{a_n} \leftarrow n_{a_n} + 1$, $S_{a_n}^2 \leftarrow S_{a_n}^2 + e_n^2$, $\tau_{a_n}^2 = S_{a_n}^2 / n_{a_n}$;
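A minimal neural instantiation of Algorithm 4 follows, assuming PyTorch; the two-layer network, learning rate, and the initialization of the pull counts to 1 (rather than the round-robin of Algorithm 1) are simplifying choices of this sketch, not prescriptions from the paper.

```python
import numpy as np
import torch


class NeuralSAU:
    """Sketch of Algorithm 4 (Neural-SAU-Sampling / Neural-SAU-UCB)."""

    def __init__(self, p, k, rule="sampling", lr=1e-2):
        self.net = torch.nn.Sequential(
            torch.nn.Linear(p, 64), torch.nn.ReLU(), torch.nn.Linear(64, k))
        self.opt = torch.optim.SGD(self.net.parameters(), lr=lr)
        self.n = np.ones(k)    # pull counts; set to 1 here to avoid division by zero
        self.S2 = np.ones(k)   # cumulative squared residuals S_a^2 (init as Algorithm 1)
        self.rule = rule

    def predict(self, x):
        # PREDICT: a plain forward pass, no posterior sampling required
        with torch.no_grad():
            return self.net(torch.as_tensor(x, dtype=torch.float32)).numpy()

    def action(self, mu_hat, step):
        # ACTION: SAU exploration on top of the predicted values
        tau2 = self.S2 / self.n
        if self.rule == "ucb":
            scores = mu_hat + np.sqrt(tau2 * np.log(max(step, 2)) / self.n)  # eq. (7)
        else:
            scores = np.random.normal(mu_hat, np.sqrt(tau2 / self.n))        # eq. (8)
        return int(np.argmax(scores))

    def update(self, r, a, x):
        # UPDATE: one SGD step on the chosen arm's squared prediction error
        pred = self.net(torch.as_tensor(x, dtype=torch.float32))[a]
        loss = 0.5 * (float(r) - pred) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        e = float(r) - float(pred.detach())
        self.n[a] += 1
        self.S2[a] += e ** 2
```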
5.3 Empirical Evaluation of SAU on Deep Contextual Bandit Problems

Benchmarks and baseline algorithms. Our empirical evaluation of SAU’s performance in the deep contextual bandit setting is based on the experiments by [18], who benchmarked the main TS-based approximate posterior sampling methods over a series of contextual bandit problems. We test SAU on the same contextual bandit problems against 4 competing algorithms, consisting of the 4 best-ranking algorithms identified by [18]: LinearPosterior (a closed-form Bayesian linear regression algorithm for exact posterior inference under the assumption of a linear contextual bandit [27]), LinearGreedy (epsilon-greedy exploration under the assumption of a linear contextual bandit), NeuralLinear (Bayesian linear regression on top of the last layer of a neural network trained with SGD [28]), and NeuralGreedy (a neural network with epsilon-greedy exploration trained with SGD). We did not include a direct comparison with NeuralUCB [29], since its scaling in memory and computational requirements quickly makes it impractical for even moderately sized applications of practical interest. Moreover, its reported performance is substantially worse than SAU-UCB.

Implementations of SAU. We implemented and tested 4 versions of SAU on the benchmarks in [18]. In the tables below we refer to them as follows: Linear-SAU-S and Linear-SAU-UCB refer to a linear regression model using SAU-Sampling and SAU-UCB as exploration strategies, respectively. Neural-SAU-S and Neural-SAU-UCB refer to a neural network model trained with SGD using SAU-Sampling and SAU-UCB, respectively.

Empirical evaluation on the Wheel Bandit. The Wheel Bandit problem is a synthetic bandit designed by [18] to study the performance of bandit algorithms as a function of the need for exploration in the environment, controlled by a parameter $\delta \in [0, 1]$ that smoothly changes the importance of exploration. The difficulty of the problem increases with $\delta$, since the problem is designed so that for $\delta$ close to 1 most contexts have the same optimal action, while only for a fraction $1 - \delta^2$ of contexts the optimal action is a different, more rewarding action (see [18] for more details). In Appendix C, Table 2 quantifies the performance of SAU-Sampling and SAU-UCB in terms of cumulative regret in comparison to the 4 competing algorithms, normalized to the performance of the Uniform baseline, which selects actions uniformly at random. There we can see that Neural-SAU-S is consistently the best algorithm, with lower cumulative regret for a wide range of the parameter $\delta$. Only for very high values of $\delta$ ($\delta = 0.99$) does the baseline algorithm NeuralLinear start to overtake it, but even in this case another variant of SAU, Linear-SAU-S, still maintains the lead in performance.
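The evaluation metric used here, cumulative regret normalized to the Uniform baseline, can be computed as sketched below; the exact normalization convention of [18] may differ, so treat this as an illustrative version.

```python
import numpy as np

def relative_cumulative_regret(rewards, optimal_rewards, uniform_rewards):
    """Cumulative regret of a policy as a fraction of the Uniform baseline's regret."""
    regret = np.cumsum(np.asarray(optimal_rewards) - np.asarray(rewards))
    uniform_regret = np.cumsum(np.asarray(optimal_rewards) - np.asarray(uniform_rewards))
    return regret[-1] / uniform_regret[-1]
```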
Empirical evaluation on real-world Deep Contextual Bandit problems. Table 1 quantifies the performance of SAU-Sampling and SAU-UCB in comparison to the 4 competing baseline algorithms, normalized to the performance of the Uniform baseline. These results show that a SAU algorithm is the best algorithm on each of the 7 benchmarks in terms of minimizing cumulative regret over all samples. Neural-SAU-S or Neural-SAU-UCB is the best combination 6 out of 7 times, and linear regression with SAU-UCB is the best on the bandit built from the Adult dataset. The next best algorithm in terms of minimizing cumulative regret is NeuralLinear [18], which incurs cumulative regret that on average is 32% higher than Neural-SAU-S and 34% higher than Neural-SAU-UCB.

As already mentioned, thanks to their implementation efficiency SAU-based algorithms are much less computation-intensive than TS-based algorithms. This is reflected in remarkably shorter execution times: on average, Neural-SAU-S and Neural-SAU-UCB run more than 10 times faster than NeuralLinear [18] (see Appendix Table 5 for details), which also makes them extremely scalable.

6 Conclusion and Discussion

Existing methods to estimate uncertainty tend to be impractical for complex value function models like deep neural networks, either because exact posterior estimation becomes infeasible, or because of how approximate algorithms coupled with deep learning training amplify estimation errors. In this paper we have introduced Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandit problems which sidesteps the problems plaguing Bayesian posterior methods. SAU only depends on the value predictions, in contrast to methods based on Thompson Sampling that instead require an estimate of the variability of the model parameters. As a result, SAU is immune to the negative effects that neural network parameterizations and optimization have on the quality of uncertainty estimation, resulting in reliable and robust exploration, as demonstrated by our empirical studies. SAU’s implementation simplicity also makes it suitable as a drop-in replacement for epsilon-greedy action selection, resulting in a scalable exploration strategy that can be effortlessly deployed in large-scale and online contextual bandit scenarios.

We have also provided theoretical justifications for SAU-based exploration by connecting SAU with posterior variance and mean-squared error estimation. However, the reason why SAU is in practice consistently better than TS-based exploration in deep bandits is still not settled theoretically. We hypothesize that this might be due to two main factors: (1) TS-based methods implement exploration by estimating the uncertainty of the internal model parameters, which might introduce estimation errors, while SAU directly estimates uncertainty at the model output; (2) in addition, the approximation error of the approximate posterior implementations of TS-based models might result in inefficient uncertainty measures of the internal model parameters. Because of the importance of contextual bandit algorithms for practical applications such as recommendation and ad-serving systems, we believe it will be important to further refine these hypotheses theoretically, to help mitigate the possible negative societal impacts that could result from deploying inefficient, miscalibrated or biased exploration algorithms. Another limitation of our work is that it developed SAU-based exploration in the specific and restricted case of bandits.
While bandits are themselves an application of great interest, we are excited about, and look forward to, further developments that extend SAU-based methods to more general sequential decision scenarios in RL beyond the bandit setting.

Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (No. 11871459) and by the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01).
1. What is the focus and contribution of the paper on exploration with deep networks?
2. What are the strengths of the proposed approach, particularly in terms of computational efficiency?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or suggestions regarding the method's generalizability and potential applications in other reinforcement learning problems?
Summary Of The Paper
Review
Summary Of The Paper
The paper introduces a new method for exploration with deep networks. Both theoretical and empirical results are presented. Theoretical results include properties and guarantees of this new method in linear cases. Empirical results include simulations and a few real-world datasets, which were used to benchmark contextual bandit algorithms.

Review
To the best of my knowledge, the method is new to this application. The contribution of this paper is clear to me. Overall, I am positive about this paper. The new method has the great advantage of computational efficiency, which is one of the big challenges for exploration with deep networks. I find the paper easy to follow, and it is clearly written. As a minor comment, it would be good to step back and also empirically show how well SAU approximates model uncertainty. This might be more helpful to convince the reader that SAU is generalizable and can potentially be applied to other RL problems. One previous paper that did similar evaluations is Ovadia et al., NeurIPS 2019. That paper specifically looks at datasets with distribution shifts with extensive benchmarks, which by no means is necessary for this paper.
NIPS
Title
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks

Abstract
Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods that have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm empirically our theory by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost, and make the code to reproduce our results available at https://github.com/ibm/sau-explore.

1 Introduction

The exploration-exploitation dilemma is a fundamental problem in models of decision making under uncertainty in various areas of statistics, economics, machine learning, game theory, adaptive control and management. Given a set of actions associated with unknown probabilistic rewards, an agent has to decide whether to exploit familiar actions to maximize immediate reward or to explore poorly understood or unknown actions for potentially finding ways to improve future rewards.

35th Conference on Neural Information Processing Systems (NeurIPS 2021).

Quantifying the uncertainty associated with the value of each action is a key component of conventional algorithms for addressing the exploration-exploitation dilemma. In particular, it is central to the two most successful exploration strategies commonly adopted in bandit settings: Upper Confidence Bound (UCB) and Thompson Sampling. The UCB algorithm [1–11] follows the principle of optimism in the face of uncertainty, which promotes exploration by maintaining confidence sets for action-value estimates and then choosing actions optimistically within these confidence sets. Thompson Sampling (TS), introduced by [12] and successfully applied in a wide range of settings [13–16], is based on the principle of sampling in the face of uncertainty, meaning that it samples actions from the posterior distribution over action-values given past rewards.
In modern reinforcement learning (RL), the flexible generalization capabilities of neural networks brought about by Deep RL have proven successful in tackling complex environments by learning mappings from high-dimensional observations directly to value estimates [17]. However, obtaining uncertainty measures over complex value functions like neural network models becomes challenging because of the intractability of estimating and updating posteriors over their parameters, limiting the applicability of Bayesian exploration strategies like UCB and TS. Recently, several proposals to address this challenge have been put forth that rely on approximations of the posterior over value functions. Unfortunately, these methods tend to underperform empirically compared to much simpler heuristics. For instance, [18] showed that in contextual bandit tasks the main approximate Bayesian posterior methods for deep neural networks are consistently beaten by simple baselines such as combining neural network value functions with a basic exploration strategy like epsilon-greedy, or using simple action-value models like linear regression where the exact posterior can be computed.

In this paper we propose a novel uncertainty measure which departs from the Bayesian approach of estimating the uncertainty over the parameters of the value prediction model. Our uncertainty measure, which we call Sample Average Uncertainty (SAU), is a frequentist quantity that only depends on the value prediction of each action. In particular, unlike UCB and TS, exploration based on SAU does not require the costly computation of a posterior distribution over models in order to estimate the uncertainty of their predictions. In fact, instead of first estimating the uncertainty over the parameters of the value function and then using it to quantify the uncertainty over outcomes, SAU directly estimates uncertainty over outcomes by measuring the variance of sample averages; this result is then plugged into the current estimate of the outcome model.

With our new measure of uncertainty of the expected action-values, we build two SAU-based exploration strategies: one based on the principle of “optimism in the face of SAU” that we name SAU-UCB, and a second one based on “sampling in the face of SAU” that we name SAU-Sampling. We investigate the use of these new exploration strategies to tackle contextual bandit problems, and show that SAU is closely related to the mean-squared error in contextual bandits. This allows us to show analytically that in the case of Bernoulli multi-armed bandits the SAU measure converges to the uncertainty of the action-value estimates obtained by TS, despite SAU being much simpler to compute and not needing to maintain a posterior distribution. In addition, we derive an upper bound on the expected regret incurred by our SAU algorithms in multi-armed bandits, showing that they achieve the optimal logarithmic regret. Finally, we empirically study the deployment of SAU-UCB and SAU-Sampling in the deep bandit setting, using them as exploration strategies for deep neural network value function models. Concretely, we follow the study of [18] and show that SAU consistently outranks the deep Bayesian bandit algorithms that they analyzed on the benchmarks that they proposed.
At each time step n we observe a context xn, select an action an from a set K = {1, . . . ,K}, after which we receive a reward rn. The value of an action a (in context xn ∈ Rp) is defined as the expected reward given that a is selected: E[rn|an = a] = µ(xn,θa), (1) where in general the action-values µ(·) depend on unknown parameters θa ∈ Rp. Our goal is to design a sequential decision-making policy π that over time learns the action parameters θa which maximize the expected reward. This goal is readily quantified in terms of minimizing expected regret, where we say that at step n we incur expected regret max a′∈K {µ(xn,θa′)} − µ(xn,θan), (2) i.e. the difference between the reward received by playing the optimal action and the one following the chosen action an. One way to design a sequential decision-making policy π that minimizes expected regret is to quantify the uncertainty around the current estimate of the unknown parameters θa. TS for instance does this by sequentially updating the posterior of θa after each action and reward. This paper presents a novel and simpler alternative method to estimate uncertainty. 3 Exploration based on Sample Average Uncertainty 3.1 Sample Average Uncertainty (SAU) In this section, we begin with introducing our novel measure of uncertainty SAU. Let Ta denote the set of time steps when action a was chosen so far, and let na be the size of this set. Based on the na rewards {rn}n∈Ta obtained with action a, the sample mean reward given action a is: r̄a = n −1 a ∑ n∈Ta rn. At this point we reiterate that exploitation and exploration are customarily traded off against each other with a Bayesian approach that estimates the uncertainty of the action-values on the basis of a posterior distribution over their parameters given past rewards. Instead, we propose a frequentist approach that directly measures the uncertainty of the sample average rewards that was just computed. Direct calculation using eq. (1) then gives us that the variance of the sample mean reward is Var(r̄a) = σ̄2a/na, where σ̄ 2 a = n −1 a ∑ n∈Ta σ2n,a with σ 2 n,a = E [ (rn − µ(xn,θa))2 ] . Assuming that there is a sequence of estimators {θ̂n,a}n∈Ta of θa, we can replace θa with θ̂n,a at each n ∈ Ta to approximate σ̄2a with a convenient statistics τ2a defined as τ2a = n −1 a ∑ n∈Ta ( rn − µ(xn, θ̂n,a) )2 . (3) With this we get an approximate sample mean variance of V̂ar(r̄a) = τ2a/na. (4) The central proposal of this paper is to use V̂ar(r̄a) as a measure of the uncertainty of the decision sequence. We call this quantity Sample Average Uncertainty (SAU), since it measures directly the uncertainty of sample mean rewards r̄a. In practice, τ2a can be updated incrementally as follows: 1. Compute the prediction residual: en = rn − µ(xn, θ̂n,an); (5) 2. Update Sample Average Uncertainty (SAU): τ2an ← τ 2 an + n −1 an [ e2n − τ2an ] . (6) Let us take a moment to contrast the uncertainty measure given by SAU and existing exploration algorithms like TS, which as we said would estimate the uncertainty of the action-value function µ(·) by maintaining and updating a distribution over its parameters θa. SAU instead directly quantifies the uncertainty associated with each action by measuring the uncertainty of the sample average rewards. The clear advantage of SAU is that it is simple and efficient to compute: all it requires are the prediction residuals rn − µ(xn, θ̂n,an) without any need to model or access the uncertainty of µ(xn, θ̂n,a). 
Because of the simplicity of its implementation, SAU can be naturally adapted to arbitrary action-value functions. In particular, it can be used to implement an exploration strategy for action-value function parameterized as deep neural networks or other model classes for which TS would be infeasible because of the intractability of computing a probability distribution over models. Note that in updating τ2a we use the residuals obtained at each step rather than re-evaluating them using later estimates. This is a design choice motivated by the goal of minimizing the computation cost and implementation efficiency of SAU. Moreover, this choice can be justified from the viewpoint of the statistical efficiency, since, as the number of training samples increases, the impact of initial residuals will decrease, so that the benefit of re-evaluating them incurs diminishing returns. Proposition 3 formalizes this argument by showing that indeed τ2a as computed in eq. (6) is concentrated around its expectation. In addition, perhaps as importantly, the aim of SAU is to provide a quantity to support exploration. The effect of potentially inaccurate residuals in the initial steps may actually be beneficial due to the introduction of additional noise driving initial exploration. This might be in part at the root of the good empirical results. 3.2 SAU-based Exploration in Bandit Problems We now use the SAU measure to implement exploration strategies for (contextual) bandit problems. SAU-UCB. UCB is a common way to perform exploration. Central to UCB is the specification of an “exploration bonus” which is typically chosen to be proportional to the measure of uncertainty. Accordingly, we propose to use the SAU measure τ2a as exploration bonus. Specifically, given value predictions µ̂n,a = µ(xn, θ̂n,a) for each a at step n, we modify the values as µ̃n,a = µ̂n,a + √ n−1a τ2a log n, (7) then choose the action by an = arg maxa({µ̃n,a}a∈K). We call this implementation of UCB using SAU as exploration bonus: SAU-UCB. SAU-Sampling. “Sampling in the face of uncertainty” is an alternative exploration principle that we propose to implement with SAU in addition to UCB. This is inspired by TS which samples the success probability estimate µ̂a from its posterior distribution. Analogously, we propose to sample values from a parametric Gaussian distribution with a mean given by the value prediction and a variance given by σ̄2a. This results in sampling values µ̃n,a at each time n as: µ̃n,a ∼ N ( µ̂n,a, τ 2 a/na ) , (8) then choosing the action by an = arg maxa({µ̃n,a}a∈K). We call this use of SAU inspired by TS, SAU-Sampling. SAU-UCB and SAU-Sampling are summarized in Algorithm 1. Algorithm 1 SAU-UCB and SAU-Sampling for bandit problems 1: Initialize: θ̂a, S2a = 1 and na = 0 for a ∈ K. 2: for n = 1, 2, . . . do 3: Observe context xn; 4: for a = 1, . . . ,K do 5: Calculate the prediction µ̂n,a = µ(xn; θ̂a) and τ2a = S 2 a/na; 6: Draw a sample µ̃n,a = µ̂n,a + √ τ2an −1 a log n (SAU-UCB) or µ̃n,a ∼ N ( µ̂n,a, n −1 a τ 2 a ) (SAU-Sampling); 7: end for 8: Compute an = arg maxa({µ̃n,a}a∈K) if n > K, otherwise an = n; 9: Select action an, observe reward rn; 10: Update θ̂an and increment nan ← nan + 1; 11: Update S2an ← S 2 an + e 2 n using prediction error calculated as en = rn − µ̂n,an ; 12: end for 3.3 Novelty and comparison with related approaches Using the variance estimation in MAB is not novel. 
For example [19] makes use of Bernstein’s inequality to refine confidence intervals by additionally considering the uncertainty from estimating variance of reward noise. Our approach is fundamentally different from it with two aspects. First, Algorithm 1 is to propose a novel measure to approximate the uncertainty of the estimate of the mean reward that would afford such a flexible implementation and can therefore directly extended and scaled up to complicated value models like deep neural networks. Second, our SAU quantity τ2 is the per-step squared prediction error, i.e., the average cumulative squared prediction error, as opposed to an estimate of the variance of the different arms. In fact, τ2 does not rely on the traditional variance estimation analyzed by[19], but is instead simply computed directly from the prediction. This difference makes SAU even easier to implement and adapt to settings like deep networks. The exploration bonus in Algorithm 1 is not a function of the observed context, though it is updated from historical observations of the context. The algorithm could indeed be extended to provide a quantification of reward uncertainty that is a function of the current context by, for instance, fitting the SAU quantity as a function of context. Clearly, this will come at the cost of substantially increasing the complexity of the algorithm. Therefore to avoid this additional complexity, we instead focus the paper on the development of the SAU quantity as a simple estimate of uncertainty to efficiently drive exploration. However, exploring this possibility is a potentially exciting direction for future work. 4 SAU in Multi-Armed Bandits 4.1 SAU Approximates Mean-squared Error and TS in Multi-armed Bandits Before considering the contextual bandits scenario, we analyze the measure of uncertainty provided by SAU in multi-armed bandits, and compare it to the uncertainty computed by TS. This will help motivate SAU and elucidate its functioning. We assume a multi-armed Bernoulli bandit, i.e. at each step n each action a ∈ K results in a reward sampled from rn ∼ Bernoulli(µa) with fixed (unknown) means µa ∈ [0, 1]. Assume that action a has been taken na times so far, and let µ̂a denote the sample averages of the rewards for each action. The prediction residual eq. (5) is en = rn − µ̂an and is the central quantity to compute SAU. TS in the case of Bernoulli bandits is typically applied by assuming that the prior follows a Beta distribution, i.e. the values are sampled from Beta(αa, βa) with parameters αa and βa for a ∈ K. Uncertainty around the estimated mean values are then quantified by its variance denoted by V̂a (see Appendix A.1). We then have the following proposition relating SAU and TS in Bernoulli bandits: Proposition 1 For Beta Bernoulli bandits the expectation of the average prediction residual e2n/nan is an approximate unbiased estimator of the expectation of the posterior variance V̂a in TS. Concretely: E[V̂an ] = E[e2n/nan ] +O ( n−2an ) . Proof Proof of Proposition 1 is provided in Appendix A.1. Proposition 1 says that SAU asymptotically approximates TS for Bernoulli bandits, despite not needing to assume a prior and update a posterior distribution over parameters. In Appendix A.3 we support this empirically by showing that in multi-armed bandits SAU rivals TS. 
The following proposition further characterizes the prediction residual: Proposition 2 For Bernoulli bandits the expectation of the prediction residual used in SAU satisfies E[e2n/nan ] = E[(rn − µ̂an)2/nan ] = E [ (µ̂an − µan)2 ] +O ( n−2an ) . Proof Proof of Proposition 2 is provided in Appendix A.2. Proposition 2 says that the prediction residual en = rn − µ̂an is an approximately unbiased estimator of the mean squared error E [ (µ̂an − µan)2 ] . This means that for Bernoulli bandits, SAU closely approximates the uncertainty of the action-value estimates. Armed with this characterization of the prediction residual rn−µ̂an in Proposition 2, we now quantify the performance of the estimator τ2a in eq. (3) in terms of its concentration around its expectation: Proposition 3 For δ ∈ [ 2 exp ( −σ2ana/(32c) ) , 1 ) , where σ2a is the variance of rj for j ∈ Ta and c a constant, we have Pr {∣∣τ2a − E [τ2a ]∣∣ ≥ σa√8c/(na log(δ/2))} ≤δ, Proof Proof of Proposition 3 is provided in Appendix A.4. Proposition 3 says that τ2a is concentrated around its expectation, and thus remains stable as it is being updated. In Appendix A.6 we also show that E [ τ2a ] → σ2a as na →∞, and in Appendix A.7 we derive an upper bound on the expected regrets of SAU-UCB and SAU-Sampling in multi-armed bandits proving that the optimal logarithmic regrets are achievable uniformly over time, which says that the theoretical performance of SAU rivals TS in multi-armed bandits. 4.2 SAU in Linear Contextual Bandits: Theoretical analysis We now show that the results in Proposition 2 also hold for another important bandit model beside Bernoulli bandits, i.e. linear contextual bandits defined by the following outcome model: rn = x > n θa + n,a, n = 1, 2, . . . , (9) where xn,θa ∈ Rp, and n,a are iid random variables with variance σ2a. Assume action awas selected na times. We obtain the least-squares estimator θ̂n,an = ( ∑ j∈Tn,an x>j xj) −1( ∑ j∈Tn,an x>j rj). Accordingly, the prediction and the prediction residual at step n are, respectively, µ̂n,an = x > n θ̂n,an and e 2 n = (rn − x>n θ̂n,an)2. (10) Denote hn = x>n ( ∑ j∈Tn,an x>j xj) −1xn. The mean squared error of x>n θ̂n,an is MSEn = E[(x>n θ̂n,an−x>n θan)2]. With direct calculation we see that MSEn = hnσ2an and that E [ e2n/nan ] = (1− hn)σ2an/nan . Therefore, we have the following proposition: Proposition 4 For linear contextual bandits (9) we have that E[e2n/nan ] = (hnnan)−1(1− hn) MSEn. Furthermore, assuming that there exist constants c1 and c2 so that c1/nan ≤ hn ≤ c2/nan , then c−12 (1− c2/nan) MSEn ≤ E [ e2n/nan ] ≤ c−11 (1− c1/nan) MSEn. Proposition 4 provides a lower and an upper bound for E [ e2n/nan ] in terms of MSEn, meaning that on average SAU is a conservative measure of the uncertainty around x>n θ̂n,an . Noting that 0 ≤ hj ≤ 1 and ∑ j∈Tn,an hj = p, the assumption that c1/nan ≤ hn ≤ c2/nan requires that hn does not dominate or is dominated by other terms hj , with j ∈ Tn,an , meaning that contexts should be “homogeneous” to a certain extent. To examine the robustness to violations of this assumption, in the simulation in Appendix B we empirically test the performance under a heavy-tailed t-distribution with df = 2. The results show that SAU works robustly even under such type of context inhomogeneity. 4.3 SAU in Linear Contextual Bandits: Empirical evaluation on synthetic data In this section, we present simulation results quantifying the performance of our SAU-based exploration algorithms in linear contextual bandits. 
We evaluate SAU on synthetically generated datasets to address two questions: (1) How does SAU’s performance compare against Thompson Sampling?, and (2) How robust is SAU in various parameter regimes? We consider three scenarios for K (the number of actions) and p (the context dimensionality): (a) K = 5, p = 5, (b) K = 20, p = 5, and (b) K = 5, p = 40. The horizon is N = 20000 steps. For each action a, parameters θa are drawn from a uniform distribution in [−1, 1], then normalized so that ‖θa‖ = 1. Next, at each step n context xn is sampled from a Gaussian distribution N (0p, Ip). Finally, we set the noise variance to be σ2 = 0.52 so that the signal-to-noise ratio equals 4. We compare our SAU-based exploration algorithms, SAU-UCB and SAU-Sampling to Thompson Sampling (“TS” in Fig. 1). For TS on linear model, we follow [18] and use Bayesian linear regression for exact posterior inference. We also consider the PrecisionDiag approximation for the posterior covariance matrix of θa with the same priors as in [18] (“TSdiag” in Fig. 1). Fig. 1a) shows regret as a function of step for (K, p) = (5, 5). From the figure we have two observations: SAU-Sampling is comparable to TS, and SAU-UCB achieves better regret than TS. In a) (K, p)=(5, 5) b) (K, p)=(20, 5) c) (K, p)=(5, 40) terms of cumulative regret SAU significantly outperforms TS and TSdiag. Figures 1b) and c) show the effects of larger K and p, respectively. The observations from Fig. 1a) still hold in these cases, implying that SAU’s performance is robust to an increase in action space and context dimension. We also consider four other cases: (1) the elements of θa are sampled from N (0, 1) then are normalized; (2) the model errors are correlated with AR(1) covariance structure with correlation ρ = 0.5; (3) the elements in xi are correlated with AR(1) covariance structure with correlation ρ = 0.5; and (4) the elements of xi are sampled from a heavy-tailed t-distribution with df = 2 and are truncated at 5. These results are shown in Appendix B and are consistent with the results in Fig. 1 confirming SAU’s robustness to various contextual linear bandit problems. 5 Deep Contextual Bandits 5.1 Deep Bayesian Bandit Algorithms Deep contextual bandits refers to tackling contextual bandits by parameterizing the action-value function as a deep neural network µ(x,θ), thereby leveraging models that have been very successful in the large-scale supervised learning [20] and RL [17]. Notice that in the deep setting we denote all parameters with θ = {θa}a∈K, as common in the neural network literature. In particular, θ includes the parameters that are shared across actions, as well as those of the last layer of the network which are specific to each action a. Algorithm 2 breaks down a generic deep contextual bandit algorithm in terms of an API exposing its basic subroutines: PREDICT (which outputs the set of action-values {µn,a}a∈K given the observation xn), ACTION (which selects an action given all the action-values), and UPDATE (which updates model parameters at the and of the step). In this scheme Thompson Sampling (TS) is implemented as in Algorithm 3, which underlines where TS promotes exploration by sampling from a distribution over model parameters Pn(θ). In principle this provides an elegant Bayesian approach to tackle the exploration-exploitation dilemma embodied Algorithm 2 Generic Deep Contextual Bandit algorithm 1: for n = 1, 2, . . . 
do 2: Observe context xn; 3: Compute values {µn,a}a∈K = PREDICT(xn); 4: Choose an = ACTION({µn,a}a∈K), observe reward rn; 5: UPDATE (rn, an,xn); 6: end for by contextual bandits. Unfortunately, representing and updating a posterior distribution over model parameters Pn(θ) exactly becomes intractable for complex models such as deep neural networks. Algorithm 3 Thompson Sampling for Deep Contextual Bandits 1: function PREDICT(xn) 2: Exploration: Sample model parameters from posterior distribution: θ̂n ∼ Pn(θ); 3: Return predicted values {µ̂n,a}a∈K = µ(xn, θ̂n), where 4: function ACTION({µ̂n,a}a∈K) 5: Return an = arg maxa({µ̃n,a}a∈K); 6: function UPDATE(rn, an,xn) 7: Use triplet (rn, an,xn) to update posterior distribution: Pn+1(θ)← Pn(θ); To obviate this problem, several techniques that heuristically approximate posterior sampling have emerged, such as randomly perturbing network parameters [21–23], or bootstrapped sampling [24]. Within the scheme of Algorithm 2 the role of random perturbation and bootstrapped sampling are to heuristically emulate the model sampling procedure promoting exploration in the PREDICT subroutine (see TS Algorithm 3). However, systematic empirical comparisons recently demonstrated that simple strategies such as epsilon-greedy [17, 25] and Bayesian linear regression [26] remain very competitive compared to these approximate posterior sampling methods in deep contextual bandit. In particular, [18] showed that linear models where the posterior can be computed exactly, and epsilon-greedy action selection overwhelmingly outrank deep methods with approximate posterior sampling in a suite of contextual bandit benchmarks based on real-world data. 5.2 SAU for Deep Contextual Bandits We now re-examine the deep contextual bandits benchmarks in [18] and show that SAU can be seamlessly combined with deep neural networks, resulting in an exploration strategy whose performance is competitive with the best deep contextual bandit algorithms identified by [18]. Algorithm 4 shows the deep contextual bandit implementation of SAU. Notice that the PREDICT subroutine is remarkably simple, consisting merely in the forward step of the deep neural network value prediction model. In contrast to our extremely simple procedure, TS-based methods require at this step to (approximately) sample from the model posterior to implement exploration. In SAU exploration is instead taken care of by the ACTION subroutine, which takes the values as inputs and either explores through sampling from a distribution around the predicted values (SAU-Sampling) or through an exploration bonus added to them (SAU-UCB). SAU then selects the action corresponding to the maximum of these perturbed values. The UPDATE for SAU is also quite simple, and consists in updating the neural network parameters to minimize the reward prediction error loss ln following action selection using SGD via backprop, or possibly its mini-batch version (which would then be carried out on a batch of (rn, an,xn) triplets previously stored in a memory buffer). UPDATE then updates the count and the SAU measure τan for the selected action an. We notice that the simplicity of SAU for deep contextual bandits is akin to the simplicity of epsilongreedy, for which exploration is also implemented in the ACTION subroutine (see Algorithms 5 in Appendix E). In fact, comparing the two algorithms it is clear that SAU can be used as a drop-in replacement for epsilon-greedy exploration, making it widely applicable. 
Algorithm 4 SAU for Deep Contextual Bandits (SAU-Neural-Sampling and UCB) 1: function PREDICT(xn) 2: Return predicted values {µ̂n,a}a∈K = µ(xn, θ̂n); 3: function ACTION({µ̂n,a}a∈K) 4: Exploration: Compute µ̃n,a ∼ N ( µ̂n,a, τ 2 a/na ) (SAU-Sampling) 5: or µ̃n,a = µ̂n,a + √ τa log n/na (SAU-UCB); 6: Return an = arg maxa({µ̃n,a}a∈K); 7: function UPDATE(rn, an,xn) 8: Compute prediction error en = rn − µ̂n,an and loss ln = 12 (rn − µ̂n,an) 2 9: Update model parameters to θ̂n+1 using SGD with gradients ∂ln∂θ (or mini-batch version); 10: Update exploration parameters: nan ← nan + 1, S2an ← S 2 an + e 2 n τ 2 an = S 2 an/nan ; 5.3 Empirical Evaluation of SAU on Deep Contextual Bandit Problems Benchmarks and baseline algorithms. Our empirical evaluation of SAU’s performance in the deep contextual bandit setting is based on the experiments by [18], who benchmarked the main TS-based approximate posterior sampling methods over a series of contextual bandit problems. We test SAU on the same contextual bandit problems against 4 competing algorithms consisting in the 4 best ranking algorithms identified by [18], which are: LinearPosterior (a closed-form Bayesian linear regression algorithm for exact posterior inference under the assumption of a linear contextual bandit [27]), LinearGreedy (epsilon-greedy exploration under the assumption of a linear contextual bandit), NeuralLinear (Bayesian linear regression on top of the last layer of a neural network trained with SGD [28]) and NeuralGreedy (a neural network with epsilon-greedy exploration trained with SGD). We neglected a direct comparison with NeuralUCB [29], since its scaling in memory and computational requirements make it quickly impractical for even moderately sized applications of practical interest. Moreover, its reported performance is substantially worse than SAU-UCB. Implementations of SAU. We implemented and tested 4 versions of SAU on the benchmarks in [18]. In the Tables below we refer to them a follows: Linear-SAU-S and Linear-SAU-UCB refer to a linear regression model using SAU-Sampling and SAU-UCB as exploration strategies, respectively. NeuralSAU-S and Neural-SAU-UCB refer to a neural network model trained with SGD using SAU-Sampling and SAU-UCB, respectively. Empirical evaluation on the Wheel Bandit. The Wheel Bandit Problem is a synthetic bandit designed by [18] to study the performance of bandit algorithms as a function of the need for exploration in the environment by varying a parameter δ ∈ [0, 1] that smoothly changes the importance of exploration. In particular, the difficulty of the problem increases with δ, since the problem is designed so that for δ close to 1 most contexts have the same optimal action, while only for a fraction 1− δ2 of contexts the optimal action is a different more rewarding action (see [18] for more details). In Appendix C, Table 2 quantifies the performance of SAU-Sampling and SAU-UCB in terms of cumulative regret in comparison to the 4 competing algorithms, and normalized to the performance of the Uniform baseline, which selects actions uniformly at random. There we can see that Neural-SAU-S is consistently the best algorithm with lower cumulative regret for a wide rage of the parameter δ. Only for very high values of δ (δ = 0.99) the baseline algorithm NeuralLiner starts to overtake it, but even in this case, another variant of SAU, SAU-Linear-S still maintains the lead in performance. Empirical evaluation on real-world Deep Contextual Bandit problems. 
Table 1 quantifies the performance of SAU-Sampling and SAU-UCB in comparison to the 4 competing baseline algorithms, and normalized to the performance of the Uniform baseline. These results show that a SAU algorithm is the best algorithm in each of the 7 benchmarks in terms of minimizing cumulative regret over all samples. Neural-SAU-S or Neural-SAU-UCB are the best combination 6 out of 7 times, and linear regression with SAU-UCB is the best on the bandit built from the Adult dataset. The next best algorithm in terms of minimizing cumulative regret is NeuralLinear [18], which incurs cumulative regret that on average is 32% higher than Neural-SAU-S and 34% higher than Neural-SAU-UCB. As already mentioned, thanks to their implementation efficiency SAU-based algorithms are much less computation intensive than TS-based algorithms. This is reflected in the remarkably shorter execution time: on average Neural-SAU-S and Neural-SAU-UCB run more than 10 time faster than NeuralLinear [18] (see Appendix Table 5 for details), also making them extremely scalable. 6 Conclusion and Discussion Existing methods to estimate uncertainty tend to be impractical for complex value function models like deep neural networks, either because exact posterior estimation become unfeasible, or due to how approximate algorithms coupled with deep learning training amplify estimation errors. In this paper, we have introduced Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandit problems which sidesteps the mentioned problems plaguing Bayesian posterior methods. SAU only depends on the value prediction, in contrast to methods based on Thompson Sampling that instead require an estimate of the variability of the model parameters. As a result, SAU is immune to the negative effects that neural network parameterizations and optimization have on the quality of uncertainty estimation, resulting in reliable and robust exploration as demonstrated by our empirical studies. SAU’s implementation simplicity also makes it suitable as a drop-in replacement for epsilon-greedy action selection, resulting in a scalable exploration strategy that can be effortlessly deployed in large-scale and online contextual bandit scenarios. We also have provided theoretical justifications for SAU-based exploration by connecting SAU with posterior variance and mean-squared error estimation. However, the reasons why SAU is in practice consistently better than TS-based exploration in deep bandits is still not settled theoretically. We hypothesize that this might be due to two main reasons: (1) TS-based methods implement exploration by estimating the uncertainty of the internal model parameters, which might introduce estimation errors, while SAU directly estimates uncertainty at the model output; (2) in addition, the approximation error from the approximate posterior implementations of TS-based models might result in inefficient uncertainty measures of the internal model parameters. Because of the importance of contextual bandit algorithms for practical applications like for instance recommendation and ad servicing systems, we believe that it will be important to further theoretically refine these hypotheses to help mitigate the possible negative societal impacts that could result from deploying inefficient, miscalibrated or biased exploration algorithms. Another limitation of our work is that it developed SAU-based exploration in the specific and restricted case of bandits. 
While bandits are an application of interest in their own right, we look forward to further developments extending SAU-based methods to more general sequential decision-making scenarios in RL beyond the bandit setting.

Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (No. 11871459) and by the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01).
1. What is the focus and contribution of the paper on contextual bandits?
2. What are the strengths and weaknesses of the proposed algorithm, particularly regarding its novelty and connection to past works?
3. Do you have any concerns about the design choices made in the algorithm, such as using outdated estimates, and how they might impact performance in practical applications?
4. Are there any questions regarding the theoretical results provided for Bernoulli and linear bandits?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes an algorithm for contextual bandits based on function approximation using deep neural networks. In order to trade off exploration and exploitation, the proposed algorithm estimates the variance of the sample mean for each action's rewards, and uses a UCB-like approach to select promising actions. Theoretical results for Bernoulli and linear bandits are provided. Empirical experiments show that the proposed algorithm outperforms standard alternatives in popular benchmark datasets.

Review
I'll list strong (+) and weak (-) points, together with questions (?).
(+) The algorithm is conceptually simple, computationally cheap, and easy to implement.
(+) Reproducibility: code made available.
(-) My main concern is novelty. If I'm not missing something, the idea is to instantiate the standard UCB algorithm with NN representations (see, say, eq 1 in [1]). For each action, we estimate its mean value for the current context, and add a "bonus" that depends on the action but not on the current context, as opposed to relying on uncertainty measures that also depend on the context (for example, see eq 5 in [2]). I don't think this is highlighted enough in the paper. How much better can we do by also using the specific context at hand to quantify reward uncertainty?
(-) Literature review and connection with past work is completely missing. For example, how does the proposed approach compare to "Neural Contextual Bandits with UCB-based Exploration" by Zhou, Li and Gu?
(?/-) Why not use \theta_t in (3) for all observations (t = current step)? From a statistical perspective, I don't see any reason to use outdated estimates. I assume this is for computational reasons, as it's cheaper to not re-evaluate the residuals for the uncertainty estimate at every step (you could do it every M steps though, otherwise your initial residuals may not be accurate at some point). What's the regret price of not updating \theta? I think comparing these two approaches in the empirical results is needed to justify this design choice (i.e., if the difference in performance is tiny but the computational savings are huge, then it should be fine). For practical applications where the environment may not even be stationary (say ads), this could become a big issue, but updating residuals every now and then shouldn't be a big deal.
(?) Aren't we missing a key UCB parameter (\alpha) in (7), which controls the width of the interval (and has been observed to have a strong impact on UCB performance)? Was this optimized for the experiments?
(?) If the main hypothesis to test here is that a simple context-independent variance estimate is enough to explore, why not compare with a simple NN algorithm that outputs not only the mean reward for a context, but some sort of context-dependent variance estimate? This could be optimized by maximizing likelihood at the observed r_i with respect to a Gaussian distribution N(f_mu(x_i), f_sigma2(x_i)). Exploration then can use this uncertainty in the same way you propose. I'm not saying this will work better, but it may shed some light on the usefulness (or lack thereof) of context-dependent exploration.
[1] - Use of variance estimation in the multi-armed bandit problem (Audibert, Munos and Szepesvari).
[2] - A Contextual-Bandit Approach to Personalized News Article Recommendation (Li, Chu, Langford, Schapire).
Typos:
Line 4: "the the"
Line 132: In Algorithm 1, \tau is missing the square in SAU-UCB, step 6.
NIPS
Title
Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks

Abstract
Designing efficient exploration is central to Reinforcement Learning due to the fundamental problem posed by the exploration-exploitation dilemma. Bayesian exploration strategies like Thompson Sampling resolve this trade-off in a principled way by modeling and updating the distribution of the parameters of the action-value function, the outcome model of the environment. However, this technique becomes infeasible for complex environments due to the computational intractability of maintaining probability distributions over parameters of outcome models of corresponding complexity. Moreover, the approximation techniques introduced to mitigate this issue typically result in poor exploration-exploitation trade-offs, as observed in the case of deep neural network models with approximate posterior methods that have been shown to underperform in the deep bandit scenario. In this paper we introduce Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandits. While Bayesian approaches like Thompson Sampling estimate outcome uncertainty indirectly by first quantifying the variability over the parameters of the outcome model, SAU is a frequentist approach that directly estimates the uncertainty of the outcomes based on the value predictions. Importantly, we show theoretically that the uncertainty measure estimated by SAU asymptotically matches the uncertainty provided by Thompson Sampling, as well as its regret bounds. Because of its simplicity, SAU can be seamlessly applied to deep contextual bandits as a very scalable drop-in replacement for epsilon-greedy exploration. We confirm our theory empirically by showing that SAU-based exploration outperforms current state-of-the-art deep Bayesian bandit methods on several real-world datasets at modest computation cost, and make the code to reproduce our results available at https://github.com/ibm/sau-explore.

1 Introduction
The exploration-exploitation dilemma is a fundamental problem in models of decision making under uncertainty in various areas of statistics, economics, machine learning, game theory, adaptive control and management. Given a set of actions associated with unknown probabilistic rewards, an agent has to decide whether to exploit familiar actions to maximize immediate reward or to explore poorly understood or unknown actions to potentially find ways to improve future rewards.
Quantifying the uncertainty associated with the value of each action is a key component of conventional algorithms for addressing the exploration-exploitation dilemma. In particular, it is central to the two most successful exploration strategies commonly adopted in bandit settings: Upper Confidence Bound (UCB) and Thompson Sampling. The UCB algorithm [1-11] follows the principle of optimism in the face of uncertainty, which promotes exploration by maintaining confidence sets for action-value estimates and then choosing actions optimistically within these confidence sets. Thompson Sampling (TS), introduced by [12] and successfully applied in a wide range of settings [13-16], is based on the principle of sampling in the face of uncertainty, meaning that it samples actions from the posterior distribution over action-values given past rewards.
In modern reinforcement learning (RL), the flexible generalization capabilities of neural networks brought about by Deep RL have proven successful in tackling complex environments by learning mappings from high-dimensional observations directly to value estimates [17]. However, obtaining uncertainty measures over complex value functions like neural network models becomes challenging because of the intractability of estimating and updating posteriors over their parameters, limiting the applicability of Bayesian exploration strategies like UCB and TS. Recently, several proposals have been put forth to address this challenge, relying on approximations of the posterior over value functions. Unfortunately, these methods tend to underperform empirically compared to much simpler heuristics. For instance, [18] showed that in contextual bandit tasks the main approximate Bayesian posterior methods for deep neural networks are consistently beaten by simple baselines such as combining neural network value functions with a basic exploration strategy like epsilon-greedy, or using simple action-value models like linear regression where the exact posterior can be computed.
In this paper we propose a novel uncertainty measure which departs from the Bayesian approach of estimating the uncertainty over the parameters of the value prediction model. Our uncertainty measure, which we call Sample Average Uncertainty (SAU), is a frequentist quantity that only depends on the value prediction of each action. In particular, unlike UCB and TS, exploration based on SAU does not require the costly computation of a posterior distribution over models in order to estimate the uncertainty of their predictions. In fact, instead of first estimating the uncertainty over the parameters of the value function and then using it to quantify the uncertainty over outcomes, SAU directly estimates uncertainty over outcomes by measuring the variance of sample averages. This result is then plugged into the current estimate of the outcome model.
With our new measure of the uncertainty of the expected action-values, we build two SAU-based exploration strategies: one based on the principle of "optimism in the face of SAU" that we name SAU-UCB, and a second one based on "sampling in the face of SAU" that we name SAU-Sampling. We investigate the use of these new exploration strategies to tackle contextual bandit problems, and show that SAU is closely related to the mean-squared error in contextual bandits. This allows us to show analytically that in the case of Bernoulli multi-armed bandits the SAU measure converges to the uncertainty of the action-value estimates obtained by TS, despite SAU being much simpler to compute and not needing to maintain a posterior distribution. In addition, we derive an upper bound on the expected regret incurred by our SAU algorithms in multi-armed bandits, showing that they achieve optimal logarithmic regret. Finally, we empirically study the deployment of SAU-UCB and SAU-Sampling in the deep bandit setting, using them as exploration strategies for deep neural network value function models. Concretely, we follow the study of [18] and show that SAU consistently outranks the deep Bayesian bandit algorithms that they analyzed on the benchmarks that they proposed.

2 Problem Formulation: Contextual Bandits
The contextual bandit problem is a paradigmatic model for the study of the exploration-exploitation trade-off and is formulated as follows.
At each time step $n$ we observe a context $x_n$, select an action $a_n$ from a set $\mathcal{K} = \{1, \dots, K\}$, after which we receive a reward $r_n$. The value of an action $a$ (in context $x_n \in \mathbb{R}^p$) is defined as the expected reward given that $a$ is selected:
$$E[r_n \mid a_n = a] = \mu(x_n, \theta_a), \qquad (1)$$
where in general the action-values $\mu(\cdot)$ depend on unknown parameters $\theta_a \in \mathbb{R}^p$. Our goal is to design a sequential decision-making policy $\pi$ that over time learns the action parameters $\theta_a$ which maximize the expected reward. This goal is readily quantified in terms of minimizing expected regret, where we say that at step $n$ we incur expected regret
$$\max_{a' \in \mathcal{K}} \{\mu(x_n, \theta_{a'})\} - \mu(x_n, \theta_{a_n}), \qquad (2)$$
i.e. the difference between the reward received by playing the optimal action and the one following the chosen action $a_n$. One way to design a sequential decision-making policy $\pi$ that minimizes expected regret is to quantify the uncertainty around the current estimate of the unknown parameters $\theta_a$. TS, for instance, does this by sequentially updating the posterior of $\theta_a$ after each action and reward. This paper presents a novel and simpler alternative method to estimate uncertainty.

3 Exploration based on Sample Average Uncertainty
3.1 Sample Average Uncertainty (SAU)
In this section, we begin by introducing our novel measure of uncertainty, SAU. Let $T_a$ denote the set of time steps when action $a$ was chosen so far, and let $n_a$ be the size of this set. Based on the $n_a$ rewards $\{r_n\}_{n \in T_a}$ obtained with action $a$, the sample mean reward given action $a$ is $\bar r_a = n_a^{-1} \sum_{n \in T_a} r_n$.
At this point we reiterate that exploitation and exploration are customarily traded off against each other with a Bayesian approach that estimates the uncertainty of the action-values on the basis of a posterior distribution over their parameters given past rewards. Instead, we propose a frequentist approach that directly measures the uncertainty of the sample average rewards just computed. Direct calculation using eq. (1) gives us that the variance of the sample mean reward is $\mathrm{Var}(\bar r_a) = \bar\sigma_a^2 / n_a$, where $\bar\sigma_a^2 = n_a^{-1} \sum_{n \in T_a} \sigma_{n,a}^2$ with $\sigma_{n,a}^2 = E[(r_n - \mu(x_n, \theta_a))^2]$. Assuming that there is a sequence of estimators $\{\hat\theta_{n,a}\}_{n \in T_a}$ of $\theta_a$, we can replace $\theta_a$ with $\hat\theta_{n,a}$ at each $n \in T_a$ to approximate $\bar\sigma_a^2$ with a convenient statistic $\tau_a^2$ defined as
$$\tau_a^2 = n_a^{-1} \sum_{n \in T_a} \left( r_n - \mu(x_n, \hat\theta_{n,a}) \right)^2. \qquad (3)$$
With this we get an approximate sample mean variance of
$$\widehat{\mathrm{Var}}(\bar r_a) = \tau_a^2 / n_a. \qquad (4)$$
The central proposal of this paper is to use $\widehat{\mathrm{Var}}(\bar r_a)$ as a measure of the uncertainty of the decision sequence. We call this quantity Sample Average Uncertainty (SAU), since it directly measures the uncertainty of the sample mean rewards $\bar r_a$. In practice, $\tau_a^2$ can be updated incrementally as follows:
1. Compute the prediction residual: $e_n = r_n - \mu(x_n, \hat\theta_{n,a_n})$; (5)
2. Update Sample Average Uncertainty (SAU): $\tau_{a_n}^2 \leftarrow \tau_{a_n}^2 + n_{a_n}^{-1} \left[ e_n^2 - \tau_{a_n}^2 \right]$. (6)
Let us take a moment to contrast the uncertainty measure given by SAU with existing exploration algorithms like TS, which as we said would estimate the uncertainty of the action-value function $\mu(\cdot)$ by maintaining and updating a distribution over its parameters $\theta_a$. SAU instead directly quantifies the uncertainty associated with each action by measuring the uncertainty of the sample average rewards. The clear advantage of SAU is that it is simple and efficient to compute: all it requires are the prediction residuals $r_n - \mu(x_n, \hat\theta_{n,a_n})$, without any need to model or access the uncertainty of $\mu(x_n, \hat\theta_{n,a})$.
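To make the update concrete, here is a minimal Python sketch of the incremental SAU computation in eqs. (5)-(6). This is an illustrative sketch, not the authors' released implementation; the `SAUTracker` name and its interface are our own, and the prediction is assumed to come from whatever value model is in use.

```python
import numpy as np

class SAUTracker:
    """Tracks Sample Average Uncertainty tau^2_a per action (eqs. 5-6)."""

    def __init__(self, n_actions):
        self.n = np.zeros(n_actions)     # per-action pull counts n_a
        self.tau2 = np.zeros(n_actions)  # per-action SAU statistics tau^2_a

    def update(self, action, reward, prediction):
        e = reward - prediction          # prediction residual (eq. 5)
        self.n[action] += 1
        # incremental average of squared residuals (eq. 6)
        self.tau2[action] += (e ** 2 - self.tau2[action]) / self.n[action]

    def sample_mean_variance(self, action):
        """Approximate Var(r_bar_a) = tau^2_a / n_a (eq. 4)."""
        return self.tau2[action] / max(self.n[action], 1)
```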
Because of the simplicity of its implementation, SAU can be naturally adapted to arbitrary action-value functions. In particular, it can be used to implement an exploration strategy for action-value functions parameterized as deep neural networks or other model classes for which TS would be infeasible because of the intractability of computing a probability distribution over models.
Note that in updating $\tau_a^2$ we use the residuals obtained at each step rather than re-evaluating them using later estimates. This is a design choice motivated by the goal of minimizing the computational cost and maximizing the implementation efficiency of SAU. Moreover, this choice can be justified from the viewpoint of statistical efficiency, since, as the number of training samples increases, the impact of the initial residuals decreases, so that re-evaluating them yields diminishing returns. Proposition 3 formalizes this argument by showing that $\tau_a^2$ as computed in eq. (6) is indeed concentrated around its expectation. In addition, and perhaps as importantly, the aim of SAU is to provide a quantity to support exploration. The effect of potentially inaccurate residuals in the initial steps may actually be beneficial due to the introduction of additional noise driving initial exploration. This might be in part at the root of the good empirical results.

3.2 SAU-based Exploration in Bandit Problems
We now use the SAU measure to implement exploration strategies for (contextual) bandit problems.
SAU-UCB. UCB is a common way to perform exploration. Central to UCB is the specification of an "exploration bonus", which is typically chosen to be proportional to the measure of uncertainty. Accordingly, we propose to use the SAU measure $\tau_a^2$ as the exploration bonus. Specifically, given value predictions $\hat\mu_{n,a} = \mu(x_n, \hat\theta_{n,a})$ for each $a$ at step $n$, we modify the values as
$$\tilde\mu_{n,a} = \hat\mu_{n,a} + \sqrt{n_a^{-1} \tau_a^2 \log n}, \qquad (7)$$
then choose the action by $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a \in \mathcal{K}})$. We call this implementation of UCB using SAU as exploration bonus SAU-UCB.
SAU-Sampling. "Sampling in the face of uncertainty" is an alternative exploration principle that we propose to implement with SAU in addition to UCB. This is inspired by TS, which samples the success probability estimate $\hat\mu_a$ from its posterior distribution. Analogously, we propose to sample values from a parametric Gaussian distribution with mean given by the value prediction and variance given by the sample mean variance estimate (4). This results in sampling values $\tilde\mu_{n,a}$ at each time $n$ as:
$$\tilde\mu_{n,a} \sim \mathcal{N}(\hat\mu_{n,a}, \tau_a^2 / n_a), \qquad (8)$$
then choosing the action by $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a \in \mathcal{K}})$. We call this use of SAU inspired by TS SAU-Sampling.
SAU-UCB and SAU-Sampling are summarized in Algorithm 1.

Algorithm 1 SAU-UCB and SAU-Sampling for bandit problems
1: Initialize: $\hat\theta_a$, $S_a^2 = 1$ and $n_a = 0$ for $a \in \mathcal{K}$.
2: for $n = 1, 2, \dots$ do
3:   Observe context $x_n$;
4:   for $a = 1, \dots, K$ do
5:     Calculate the prediction $\hat\mu_{n,a} = \mu(x_n; \hat\theta_a)$ and $\tau_a^2 = S_a^2 / n_a$;
6:     Draw a sample $\tilde\mu_{n,a} = \hat\mu_{n,a} + \sqrt{\tau_a^2 n_a^{-1} \log n}$ (SAU-UCB) or $\tilde\mu_{n,a} \sim \mathcal{N}(\hat\mu_{n,a}, n_a^{-1} \tau_a^2)$ (SAU-Sampling);
7:   end for
8:   Compute $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a \in \mathcal{K}})$ if $n > K$, otherwise $a_n = n$;
9:   Select action $a_n$, observe reward $r_n$;
10:  Update $\hat\theta_{a_n}$ and increment $n_{a_n} \leftarrow n_{a_n} + 1$;
11:  Update $S_{a_n}^2 \leftarrow S_{a_n}^2 + e_n^2$ using the prediction error $e_n = r_n - \hat\mu_{n,a_n}$;
12: end for
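The two exploration rules of Algorithm 1 are equally compact in code. The following sketch builds on the hypothetical `SAUTracker` above and implements the perturbed values of eqs. (7) and (8); again, this is an illustration under our own naming, not the reference implementation.

```python
import numpy as np

def select_action(mu_hat, tracker, step, mode="ucb", rng=None):
    """Choose an action from predicted values mu_hat (shape [K]) using SAU.

    mode="ucb":      mu_tilde = mu_hat + sqrt(tau^2 n_a^{-1} log n)   (eq. 7)
    mode="sampling": mu_tilde ~ N(mu_hat, tau^2 / n_a)                (eq. 8)
    """
    rng = rng or np.random.default_rng()
    n_a = np.maximum(tracker.n, 1)  # guard against division by zero
    if mode == "ucb":
        mu_tilde = mu_hat + np.sqrt(tracker.tau2 * np.log(max(step, 2)) / n_a)
    else:
        mu_tilde = rng.normal(mu_hat, np.sqrt(tracker.tau2 / n_a))
    return int(np.argmax(mu_tilde))
```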
3.3 Novelty and comparison with related approaches
Using variance estimation in multi-armed bandits is not novel. For example, [19] makes use of Bernstein's inequality to refine confidence intervals by additionally considering the uncertainty from estimating the variance of the reward noise. Our approach is fundamentally different in two respects. First, Algorithm 1 proposes a novel measure approximating the uncertainty of the mean-reward estimate that affords a flexible implementation, and can therefore be directly extended and scaled up to complicated value models like deep neural networks. Second, our SAU quantity $\tau^2$ is the per-step squared prediction error, i.e., the average cumulative squared prediction error, as opposed to an estimate of the variance of the different arms. In fact, $\tau^2$ does not rely on the traditional variance estimation analyzed by [19], but is instead computed directly from the predictions. This difference makes SAU even easier to implement and adapt to settings like deep networks.
The exploration bonus in Algorithm 1 is not a function of the observed context, though it is updated from historical observations of the context. The algorithm could indeed be extended to provide a quantification of reward uncertainty that is a function of the current context by, for instance, fitting the SAU quantity as a function of context. Clearly, this would come at the cost of substantially increasing the complexity of the algorithm. To avoid this additional complexity, we instead focus the paper on the development of the SAU quantity as a simple estimate of uncertainty to efficiently drive exploration. Exploring this possibility, however, is a potentially exciting direction for future work.

4 SAU in Multi-Armed Bandits
4.1 SAU Approximates Mean-squared Error and TS in Multi-armed Bandits
Before considering the contextual bandit scenario, we analyze the measure of uncertainty provided by SAU in multi-armed bandits, and compare it to the uncertainty computed by TS. This will help motivate SAU and elucidate its functioning. We assume a multi-armed Bernoulli bandit, i.e. at each step $n$ each action $a \in \mathcal{K}$ results in a reward sampled from $r_n \sim \mathrm{Bernoulli}(\mu_a)$ with fixed (unknown) means $\mu_a \in [0, 1]$. Assume that action $a$ has been taken $n_a$ times so far, and let $\hat\mu_a$ denote the sample average of the rewards for each action. The prediction residual of eq. (5) is $e_n = r_n - \hat\mu_{a_n}$, and it is the central quantity needed to compute SAU.
TS in the case of Bernoulli bandits is typically applied by assuming that the prior follows a Beta distribution, i.e. the values are sampled from $\mathrm{Beta}(\alpha_a, \beta_a)$ with parameters $\alpha_a$ and $\beta_a$ for $a \in \mathcal{K}$. Uncertainty around the estimated mean values is then quantified by the posterior variance, denoted by $\hat V_a$ (see Appendix A.1). We then have the following proposition relating SAU and TS in Bernoulli bandits:
Proposition 1. For Beta Bernoulli bandits the expectation of the average prediction residual $e_n^2 / n_{a_n}$ is an approximately unbiased estimator of the expectation of the posterior variance $\hat V_a$ in TS. Concretely:
$$E[\hat V_{a_n}] = E[e_n^2 / n_{a_n}] + O(n_{a_n}^{-2}).$$
Proof of Proposition 1 is provided in Appendix A.1.
Proposition 1 says that SAU asymptotically approximates TS for Bernoulli bandits, despite not needing to assume a prior and update a posterior distribution over parameters. In Appendix A.3 we support this empirically by showing that in multi-armed bandits SAU rivals TS.
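Proposition 1 is easy to probe numerically. The following self-contained snippet (our own illustrative check, with residuals computed against the running sample mean for simplicity) compares the SAU quantity $e_n^2 / n_a$ accumulated on a single Bernoulli arm with the Beta posterior variance that TS would maintain; both converge to roughly $\mu(1-\mu)/n$.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 0.3, 5000
rewards = rng.binomial(1, mu, size=n)
steps = np.arange(1, n + 1)

# SAU side: running mean, squared residuals, tau^2 and tau^2 / n_a
mu_hat = np.cumsum(rewards) / steps
tau2 = np.cumsum((rewards - mu_hat) ** 2) / steps
sau = tau2 / steps

# TS side: Beta(1,1) prior, posterior variance after the same observations
alpha = 1 + np.cumsum(rewards)
beta = 1 + steps - np.cumsum(rewards)
post_var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))

print(sau[-1], post_var[-1])  # both are approximately mu*(1-mu)/n
```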
The following proposition further characterizes the prediction residual:
Proposition 2. For Bernoulli bandits the expectation of the prediction residual used in SAU satisfies
$$E[e_n^2 / n_{a_n}] = E[(r_n - \hat\mu_{a_n})^2 / n_{a_n}] = E[(\hat\mu_{a_n} - \mu_{a_n})^2] + O(n_{a_n}^{-2}).$$
Proof of Proposition 2 is provided in Appendix A.2.
Proposition 2 says that the prediction residual $e_n = r_n - \hat\mu_{a_n}$ yields an approximately unbiased estimator of the mean squared error $E[(\hat\mu_{a_n} - \mu_{a_n})^2]$. This means that for Bernoulli bandits, SAU closely approximates the uncertainty of the action-value estimates.
Armed with this characterization of the prediction residual $r_n - \hat\mu_{a_n}$ in Proposition 2, we now quantify the performance of the estimator $\tau_a^2$ in eq. (3) in terms of its concentration around its expectation:
Proposition 3. For $\delta \in [2 \exp(-\sigma_a^2 n_a / (32c)), 1)$, where $\sigma_a^2$ is the variance of $r_j$ for $j \in T_a$ and $c$ is a constant, we have
$$\Pr\left\{ \left| \tau_a^2 - E[\tau_a^2] \right| \ge \sigma_a \sqrt{8c \log(2/\delta) / n_a} \right\} \le \delta.$$
Proof of Proposition 3 is provided in Appendix A.4.
Proposition 3 says that $\tau_a^2$ is concentrated around its expectation, and thus remains stable as it is being updated. In Appendix A.6 we also show that $E[\tau_a^2] \to \sigma_a^2$ as $n_a \to \infty$, and in Appendix A.7 we derive an upper bound on the expected regret of SAU-UCB and SAU-Sampling in multi-armed bandits, proving that the optimal logarithmic regret is achievable uniformly over time. This says that the theoretical performance of SAU rivals that of TS in multi-armed bandits.

4.2 SAU in Linear Contextual Bandits: Theoretical analysis
We now show that the results in Proposition 2 also hold for another important bandit model besides Bernoulli bandits, namely linear contextual bandits defined by the following outcome model:
$$r_n = x_n^\top \theta_a + \epsilon_{n,a}, \quad n = 1, 2, \dots, \qquad (9)$$
where $x_n, \theta_a \in \mathbb{R}^p$, and the $\epsilon_{n,a}$ are i.i.d. random variables with variance $\sigma_a^2$. Assume action $a$ was selected $n_a$ times. We obtain the least-squares estimator $\hat\theta_{n,a_n} = (\sum_{j \in T_{n,a_n}} x_j x_j^\top)^{-1} (\sum_{j \in T_{n,a_n}} x_j r_j)$. Accordingly, the prediction and the squared prediction residual at step $n$ are, respectively,
$$\hat\mu_{n,a_n} = x_n^\top \hat\theta_{n,a_n} \quad \text{and} \quad e_n^2 = (r_n - x_n^\top \hat\theta_{n,a_n})^2. \qquad (10)$$
Denote $h_n = x_n^\top (\sum_{j \in T_{n,a_n}} x_j x_j^\top)^{-1} x_n$. The mean squared error of $x_n^\top \hat\theta_{n,a_n}$ is $\mathrm{MSE}_n = E[(x_n^\top \hat\theta_{n,a_n} - x_n^\top \theta_{a_n})^2]$. By direct calculation we see that $\mathrm{MSE}_n = h_n \sigma_{a_n}^2$ and that $E[e_n^2 / n_{a_n}] = (1 - h_n)\sigma_{a_n}^2 / n_{a_n}$. Therefore, we have the following proposition:
Proposition 4. For linear contextual bandits (9) we have that
$$E[e_n^2 / n_{a_n}] = (h_n n_{a_n})^{-1}(1 - h_n)\,\mathrm{MSE}_n.$$
Furthermore, assuming that there exist constants $c_1$ and $c_2$ such that $c_1 / n_{a_n} \le h_n \le c_2 / n_{a_n}$, then
$$c_2^{-1}(1 - c_2 / n_{a_n})\,\mathrm{MSE}_n \le E[e_n^2 / n_{a_n}] \le c_1^{-1}(1 - c_1 / n_{a_n})\,\mathrm{MSE}_n.$$
Proposition 4 provides a lower and an upper bound for $E[e_n^2 / n_{a_n}]$ in terms of $\mathrm{MSE}_n$, meaning that on average SAU is a conservative measure of the uncertainty around $x_n^\top \hat\theta_{n,a_n}$. Noting that $0 \le h_j \le 1$ and $\sum_{j \in T_{n,a_n}} h_j = p$, the assumption that $c_1 / n_{a_n} \le h_n \le c_2 / n_{a_n}$ requires that $h_n$ neither dominates nor is dominated by the other terms $h_j$, $j \in T_{n,a_n}$, meaning that contexts should be "homogeneous" to a certain extent. To examine the robustness to violations of this assumption, in the simulation in Appendix B we empirically test the performance under a heavy-tailed t-distribution with df = 2. The results show that SAU works robustly even under this type of context inhomogeneity.
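The quantities in Proposition 4 can be computed directly for a simulated arm. This sketch (illustrative, with our own variable names) fits the least-squares estimator of eq. (9) and evaluates the leverage $h_n$ and the implied $\mathrm{MSE}_n = h_n \sigma_a^2$ for a new context.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n_a, sigma = 5, 200, 0.5
theta = rng.normal(size=p)
theta /= np.linalg.norm(theta)

X = rng.normal(size=(n_a, p))                 # contexts where arm a was pulled
r = X @ theta + sigma * rng.normal(size=n_a)  # linear rewards (eq. 9)

G_inv = np.linalg.inv(X.T @ X)                # inverse Gram matrix
theta_hat = G_inv @ (X.T @ r)                 # least-squares estimator

x_new = rng.normal(size=p)
h_n = x_new @ G_inv @ x_new                   # leverage h_n
mse_n = h_n * sigma ** 2                      # MSE_n = h_n * sigma_a^2
print(h_n, mse_n, x_new @ theta_hat)
```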
4.3 SAU in Linear Contextual Bandits: Empirical evaluation on synthetic data
In this section, we present simulation results quantifying the performance of our SAU-based exploration algorithms in linear contextual bandits. We evaluate SAU on synthetically generated datasets to address two questions: (1) How does SAU's performance compare against Thompson Sampling? (2) How robust is SAU in various parameter regimes? We consider three scenarios for $K$ (the number of actions) and $p$ (the context dimensionality): (a) $K = 5$, $p = 5$, (b) $K = 20$, $p = 5$, and (c) $K = 5$, $p = 40$. The horizon is $N = 20000$ steps. For each action $a$, the parameters $\theta_a$ are drawn from a uniform distribution on $[-1, 1]$, then normalized so that $\|\theta_a\| = 1$. Next, at each step $n$ the context $x_n$ is sampled from a Gaussian distribution $\mathcal{N}(0_p, I_p)$. Finally, we set the noise variance to $\sigma^2 = 0.5^2$ so that the signal-to-noise ratio equals 4.
We compare our SAU-based exploration algorithms, SAU-UCB and SAU-Sampling, to Thompson Sampling ("TS" in Fig. 1). For TS on the linear model, we follow [18] and use Bayesian linear regression for exact posterior inference. We also consider the PrecisionDiag approximation for the posterior covariance matrix of $\theta_a$ with the same priors as in [18] ("TSdiag" in Fig. 1).
Fig. 1a) shows regret as a function of step for $(K, p) = (5, 5)$. From the figure we have two observations: SAU-Sampling is comparable to TS, and SAU-UCB achieves better regret than TS.
[Figure 1: cumulative regret for a) (K, p) = (5, 5), b) (K, p) = (20, 5), c) (K, p) = (5, 40).]
In terms of cumulative regret, SAU significantly outperforms TS and TSdiag. Figures 1b) and c) show the effects of larger $K$ and $p$, respectively. The observations from Fig. 1a) still hold in these cases, implying that SAU's performance is robust to an increase in action space and context dimension.
We also consider four other cases: (1) the elements of $\theta_a$ are sampled from $\mathcal{N}(0, 1)$ and then normalized; (2) the model errors are correlated with an AR(1) covariance structure with correlation $\rho = 0.5$; (3) the elements of $x_i$ are correlated with an AR(1) covariance structure with correlation $\rho = 0.5$; and (4) the elements of $x_i$ are sampled from a heavy-tailed t-distribution with df = 2 and truncated at 5. These results are shown in Appendix B and are consistent with the results in Fig. 1, confirming SAU's robustness across various contextual linear bandit problems.

5 Deep Contextual Bandits
5.1 Deep Bayesian Bandit Algorithms
Deep contextual bandits refers to tackling contextual bandits by parameterizing the action-value function as a deep neural network $\mu(x, \theta)$, thereby leveraging models that have been very successful in large-scale supervised learning [20] and RL [17]. Notice that in the deep setting we denote all parameters with $\theta = \{\theta_a\}_{a \in \mathcal{K}}$, as is common in the neural network literature. In particular, $\theta$ includes the parameters that are shared across actions, as well as those of the last layer of the network, which are specific to each action $a$.
Algorithm 2 breaks down a generic deep contextual bandit algorithm in terms of an API exposing its basic subroutines: PREDICT (which outputs the set of action-values $\{\mu_{n,a}\}_{a \in \mathcal{K}}$ given the observation $x_n$), ACTION (which selects an action given all the action-values), and UPDATE (which updates model parameters at the end of the step). In this scheme Thompson Sampling (TS) is implemented as in Algorithm 3, which underlines where TS promotes exploration by sampling from a distribution over model parameters $P_n(\theta)$. In principle this provides an elegant Bayesian approach to tackle the exploration-exploitation dilemma embodied by contextual bandits.

Algorithm 2 Generic Deep Contextual Bandit algorithm
1: for $n = 1, 2, \dots$ do
2:   Observe context $x_n$;
3:   Compute values $\{\mu_{n,a}\}_{a \in \mathcal{K}}$ = PREDICT($x_n$);
4:   Choose $a_n$ = ACTION($\{\mu_{n,a}\}_{a \in \mathcal{K}}$), observe reward $r_n$;
5:   UPDATE($r_n, a_n, x_n$);
6: end for
Unfortunately, representing and updating a posterior distribution over model parameters $P_n(\theta)$ exactly becomes intractable for complex models such as deep neural networks.

Algorithm 3 Thompson Sampling for Deep Contextual Bandits
1: function PREDICT($x_n$)
2:   Exploration: sample model parameters from the posterior distribution: $\hat\theta_n \sim P_n(\theta)$;
3:   Return predicted values $\{\hat\mu_{n,a}\}_{a \in \mathcal{K}} = \mu(x_n, \hat\theta_n)$;
4: function ACTION($\{\hat\mu_{n,a}\}_{a \in \mathcal{K}}$)
5:   Return $a_n = \arg\max_a(\{\hat\mu_{n,a}\}_{a \in \mathcal{K}})$;
6: function UPDATE($r_n, a_n, x_n$)
7:   Use the triplet $(r_n, a_n, x_n)$ to update the posterior distribution $P_n(\theta)$ into $P_{n+1}(\theta)$;

To obviate this problem, several techniques that heuristically approximate posterior sampling have emerged, such as randomly perturbing network parameters [21-23], or bootstrapped sampling [24]. Within the scheme of Algorithm 2, the role of random perturbation and bootstrapped sampling is to heuristically emulate the model sampling procedure promoting exploration in the PREDICT subroutine (see the TS Algorithm 3). However, systematic empirical comparisons recently demonstrated that simple strategies such as epsilon-greedy [17, 25] and Bayesian linear regression [26] remain very competitive compared to these approximate posterior sampling methods in deep contextual bandits. In particular, [18] showed that linear models where the posterior can be computed exactly, and epsilon-greedy action selection, overwhelmingly outrank deep methods with approximate posterior sampling in a suite of contextual bandit benchmarks based on real-world data.

5.2 SAU for Deep Contextual Bandits
We now re-examine the deep contextual bandit benchmarks in [18] and show that SAU can be seamlessly combined with deep neural networks, resulting in an exploration strategy whose performance is competitive with the best deep contextual bandit algorithms identified by [18].
Algorithm 4 shows the deep contextual bandit implementation of SAU. Notice that the PREDICT subroutine is remarkably simple, consisting merely of the forward pass of the deep neural network value prediction model. In contrast to our extremely simple procedure, TS-based methods require at this step to (approximately) sample from the model posterior to implement exploration. In SAU, exploration is instead taken care of by the ACTION subroutine, which takes the values as inputs and either explores through sampling from a distribution around the predicted values (SAU-Sampling) or through an exploration bonus added to them (SAU-UCB). SAU then selects the action corresponding to the maximum of these perturbed values.
The UPDATE step for SAU is also quite simple. It consists in updating the neural network parameters to minimize the reward prediction error loss $l_n$ following action selection, using SGD via backprop, or possibly its mini-batch version (which would then be carried out on a batch of $(r_n, a_n, x_n)$ triplets previously stored in a memory buffer). UPDATE then updates the count and the SAU measure $\tau_{a_n}$ for the selected action $a_n$.
We notice that the simplicity of SAU for deep contextual bandits is akin to the simplicity of epsilon-greedy, for which exploration is also implemented in the ACTION subroutine (see Algorithm 5 in Appendix E). In fact, comparing the two algorithms it is clear that SAU can be used as a drop-in replacement for epsilon-greedy exploration, making it widely applicable.
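To illustrate how lightweight this pattern is, here is a minimal PyTorch sketch of the PREDICT/ACTION/UPDATE subroutines for SAU with a small MLP value model (cf. Algorithm 4 below). The class and its interface are our own illustrative choices, not the paper's released code.

```python
import torch
import torch.nn as nn

class DeepSAUBandit:
    """Sketch of SAU for deep contextual bandits: PREDICT / ACTION / UPDATE."""

    def __init__(self, context_dim, n_actions, lr=1e-3):
        self.net = nn.Sequential(nn.Linear(context_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
        self.opt = torch.optim.SGD(self.net.parameters(), lr=lr)
        self.n = torch.ones(n_actions)   # pull counts n_a (start at 1: no /0)
        self.S2 = torch.ones(n_actions)  # cumulative squared residuals S^2_a

    def predict(self, x):
        return self.net(x)               # PREDICT: plain forward pass

    def action(self, mu_hat, step, mode="sampling"):
        tau2 = self.S2 / self.n          # ACTION: SAU exploration
        if mode == "sampling":           # eq. (8): N(mu_hat, tau^2 / n_a)
            mu_tilde = mu_hat + torch.sqrt(tau2 / self.n) * torch.randn_like(mu_hat)
        else:                            # eq. (7): UCB-style bonus
            bonus = torch.sqrt(tau2 * torch.log(torch.tensor(float(step))) / self.n)
            mu_tilde = mu_hat + bonus
        return int(mu_tilde.argmax())

    def update(self, x, a, r):           # UPDATE: SGD step + SAU statistics
        mu_hat_a = self.net(x)[a]
        loss = 0.5 * (r - mu_hat_a) ** 2
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        e = (r - mu_hat_a).detach()
        self.n[a] += 1
        self.S2[a] += e ** 2
```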
Algorithm 4 SAU for Deep Contextual Bandits (SAU-Neural-Sampling and UCB)
1: function PREDICT($x_n$)
2:   Return predicted values $\{\hat\mu_{n,a}\}_{a\in\mathcal{K}} = \mu(x_n, \hat\theta_n)$;
3: function ACTION($\{\hat\mu_{n,a}\}_{a\in\mathcal{K}}$)
4:   Exploration: compute $\tilde\mu_{n,a} \sim \mathcal{N}(\hat\mu_{n,a}, \tau_a^2/n_a)$ (SAU-Sampling)
5:   or $\tilde\mu_{n,a} = \hat\mu_{n,a} + \sqrt{\tau_a^2 \log n / n_a}$ (SAU-UCB);
6:   Return $a_n = \arg\max_a(\{\tilde\mu_{n,a}\}_{a\in\mathcal{K}})$;
7: function UPDATE($r_n, a_n, x_n$)
8:   Compute the prediction error $e_n = r_n - \hat\mu_{n,a_n}$ and loss $l_n = \frac{1}{2}(r_n - \hat\mu_{n,a_n})^2$;
9:   Update model parameters to $\hat\theta_{n+1}$ using SGD with gradients $\partial l_n / \partial\theta$ (or a mini-batch version);
10:  Update exploration parameters: $n_{a_n} \leftarrow n_{a_n} + 1$, $S^2_{a_n} \leftarrow S^2_{a_n} + e_n^2$, $\tau^2_{a_n} = S^2_{a_n}/n_{a_n}$;

5.3 Empirical Evaluation of SAU on Deep Contextual Bandit Problems
Benchmarks and baseline algorithms. Our empirical evaluation of SAU's performance in the deep contextual bandit setting is based on the experiments by [18], who benchmarked the main TS-based approximate posterior sampling methods over a series of contextual bandit problems. We test SAU on the same contextual bandit problems against 4 competing algorithms, namely the 4 best-ranking algorithms identified by [18]: LinearPosterior (a closed-form Bayesian linear regression algorithm for exact posterior inference under the assumption of a linear contextual bandit [27]), LinearGreedy (epsilon-greedy exploration under the assumption of a linear contextual bandit), NeuralLinear (Bayesian linear regression on top of the last layer of a neural network trained with SGD [28]) and NeuralGreedy (a neural network with epsilon-greedy exploration trained with SGD). We did not directly compare with NeuralUCB [29], since its scaling in memory and computational requirements makes it quickly impractical for even moderately sized applications of practical interest. Moreover, its reported performance is substantially worse than SAU-UCB.
Implementations of SAU. We implemented and tested 4 versions of SAU on the benchmarks in [18]. In the tables below we refer to them as follows: Linear-SAU-S and Linear-SAU-UCB refer to a linear regression model using SAU-Sampling and SAU-UCB as exploration strategies, respectively. Neural-SAU-S and Neural-SAU-UCB refer to a neural network model trained with SGD using SAU-Sampling and SAU-UCB, respectively.
Empirical evaluation on the Wheel Bandit. The Wheel Bandit Problem is a synthetic bandit designed by [18] to study the performance of bandit algorithms as a function of the need for exploration in the environment, by varying a parameter $\delta \in [0, 1]$ that smoothly changes the importance of exploration. In particular, the difficulty of the problem increases with $\delta$, since the problem is designed so that for $\delta$ close to 1 most contexts have the same optimal action, while only for a fraction $1 - \delta^2$ of contexts the optimal action is a different, more rewarding action (see [18] for more details). In Appendix C, Table 2 quantifies the performance of SAU-Sampling and SAU-UCB in terms of cumulative regret in comparison to the 4 competing algorithms, normalized to the performance of the Uniform baseline, which selects actions uniformly at random. There we can see that Neural-SAU-S is consistently the best algorithm, with lower cumulative regret for a wide range of the parameter $\delta$. Only for very high values of $\delta$ ($\delta = 0.99$) does the baseline algorithm NeuralLinear start to overtake it, but even in this case another variant of SAU, Linear-SAU-S, still maintains the lead in performance.
Empirical evaluation on real-world Deep Contextual Bandit problems.
Table 1 quantifies the performance of SAU-Sampling and SAU-UCB in comparison to the 4 competing baseline algorithms, normalized to the performance of the Uniform baseline. These results show that a SAU algorithm is the best algorithm in each of the 7 benchmarks in terms of minimizing cumulative regret over all samples. Neural-SAU-S or Neural-SAU-UCB is the best-performing combination in 6 out of 7 cases, and linear regression with SAU-UCB is the best on the bandit built from the Adult dataset. The next best algorithm in terms of minimizing cumulative regret is NeuralLinear [18], which incurs cumulative regret that on average is 32% higher than Neural-SAU-S and 34% higher than Neural-SAU-UCB. As already mentioned, thanks to their implementation efficiency, SAU-based algorithms are much less computation-intensive than TS-based algorithms. This is reflected in the remarkably shorter execution time: on average, Neural-SAU-S and Neural-SAU-UCB run more than 10 times faster than NeuralLinear [18] (see Appendix Table 5 for details), also making them extremely scalable.

6 Conclusion and Discussion
Existing methods to estimate uncertainty tend to be impractical for complex value function models like deep neural networks, either because exact posterior estimation becomes infeasible, or because approximate algorithms coupled with deep learning training amplify estimation errors. In this paper, we have introduced Sample Average Uncertainty (SAU), a simple and efficient uncertainty measure for contextual bandit problems which sidesteps these problems plaguing Bayesian posterior methods. SAU only depends on the value prediction, in contrast to methods based on Thompson Sampling, which instead require an estimate of the variability of the model parameters. As a result, SAU is immune to the negative effects that neural network parameterizations and optimization have on the quality of uncertainty estimation, resulting in reliable and robust exploration, as demonstrated by our empirical studies. SAU's implementation simplicity also makes it suitable as a drop-in replacement for epsilon-greedy action selection, resulting in a scalable exploration strategy that can be effortlessly deployed in large-scale and online contextual bandit scenarios.
We have also provided theoretical justifications for SAU-based exploration by connecting SAU with posterior variance and mean-squared error estimation. However, the reason why SAU is in practice consistently better than TS-based exploration in deep bandits is still not settled theoretically. We hypothesize that this might be due to two main reasons: (1) TS-based methods implement exploration by estimating the uncertainty of the internal model parameters, which might introduce estimation errors, while SAU directly estimates uncertainty at the model output; (2) in addition, the approximation error of the approximate posterior implementations of TS-based models might result in inefficient uncertainty measures of the internal model parameters. Because of the importance of contextual bandit algorithms for practical applications such as recommendation and ad-serving systems, we believe that it will be important to further theoretically refine these hypotheses to help mitigate the possible negative societal impacts that could result from deploying inefficient, miscalibrated or biased exploration algorithms. Another limitation of our work is that it developed SAU-based exploration in the specific and restricted case of bandits.
While bandits are an application of interest in their own right, we look forward to further developments extending SAU-based methods to more general sequential decision-making scenarios in RL beyond the bandit setting.

Acknowledgements
This work was partially supported by the National Natural Science Foundation of China (No. 11871459) and by the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01).
1. What is the focus and contribution of the paper regarding contextual bandits?
2. What are the strengths of the proposed approach, particularly its simplicity and consistency?
3. Do you have any concerns or questions about the proposed method, such as its independence from contextual information?
4. How does the reviewer assess the numerical evaluation and comparison with prior works?
5. Are there any clarification questions regarding the benchmark tasks and baselines used in the paper?
Summary Of The Paper Review
Summary Of The Paper
In this paper, the authors propose a simple exploration method based on Sample Average Uncertainty (SAU) for contextual bandits. SAU is a frequentist approach - it directly estimates the uncertainty in reward prediction. This uncertainty can then be combined with a UCB-type or sampling-type approach. The authors show that SAU approximates mean-squared error in Bernoulli bandits and in linear contextual bandits (under technical assumptions). Furthermore, the authors show empirically that the proposed SAU-UCB and SAU-Sampling work well under a wide range of benchmark tasks in deep contextual bandits.

Review
The main contribution of the work is the development and analysis of a simple exploration technique that is shown to work well under a good range of benchmark tasks and with theoretical properties (for simpler cases). The biggest advantage of the proposed approach is its simplicity. Furthermore, the proposed algorithms seem to be fairly consistent among different tasks based on the numerical evaluation presented in the manuscript.
SAU is a very simple method (which is its biggest advantage). It will be very helpful for readers if the authors can articulate why it works. Specifically, the exploration term in SAU is independent of the context. Why does it work while ignoring the contextual information? Is homogeneity (line 181) the reason? In deep neural network settings, what is the intuition behind it? Is it because the uncertainty under contexts is aggregated into the uncertainty in prediction?
The current numerical evaluation leverages the benchmark work in [18]. I took a quick look at [18] and have a couple of clarification questions. First, the regret reported in this paper is consistently higher than that reported in [18] for the same environment. Is that because the algorithms are run for a longer period in this paper? Second, in [18], variants of the same algorithm are evaluated, sometimes with much different performance. Which ones are adopted in the evaluation in this paper? For example, the current manuscript picked NeuralLinear as a baseline, where NeuralLinear has two variants in [18] that perform better on different tasks. Similarly, LinearPosterior is another baseline in the current manuscript, which has multiple variants in [18], some with lower regrets. Which one is used in the current manuscript?
I do note that consistency is one of the key advantages of the proposed algorithms, especially Neural-SAU-S and Neural-SAU-UCB, based on the presented numerical evaluation.
NIPS
Title
MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning

Abstract
This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong.

1 Introduction
This paper considers learning decision-making systems that exploit symmetries in the structure of the world. Deep reinforcement learning (DRL) is concerned with learning neural function approximators for decision-making strategies. While DRL algorithms have been shown to solve complex, high-dimensional problems [35, 34, 26, 25], they are often used in problems with large state-action spaces, and thus require many samples before convergence. Many tasks exhibit symmetries, easily recognized by a designer of a reinforcement learning system. Consider the classic control task of balancing a pole on a cart. Balancing a pole that falls to the right requires an equivalent, but mirrored, strategy to one that falls to the left. See Figure 1.
In this paper, we exploit knowledge of such symmetries in the state-action space of Markov decision processes (MDPs) to reduce the size of the solution space. We use the notion of MDP homomorphisms [32, 30] to formalize these symmetries. Intuitively, an MDP homomorphism is a map between MDPs, preserving the essential structure of the original MDP while removing redundancies in the problem description, i.e., equivalent state-action pairs. The removal of these redundancies results in a smaller state-action space, upon which we may more easily build a policy. While earlier work has been concerned with discovering an MDP homomorphism for a given MDP [32, 30, 27, 31, 6, 39], we are instead concerned with how to construct deep policies satisfying the MDP homomorphism. We call these models MDP homomorphic networks.
MDP homomorphic networks use experience from one state-action pair to improve the policy for all 'equivalent' pairs (see Section 2.1 for a definition). They do this by tying the weights for two states if they are equivalent under a transformation chosen by the designer, such as $s$ and $L[s]$ in Figure 1. Such weight-tying follows a similar principle to the use of convolutional networks [18], which are equivariant to translations of the input [11]. In particular, when equivalent state-action pairs can be related by an invertible transformation, which we refer to as group-structured, we show that the policy network belongs to the class of group-equivariant neural networks [11, 46]. Equivariant neural networks are a class of neural network with built-in symmetries [11, 12, 46, 43, 41]. They are a generalization of convolutional neural networks, which exhibit translation symmetry, to transformation groups (group-structured equivariance) and transformation semigroups [47] (semigroup-structured equivariance). They have been shown to reduce sample complexity for classification tasks [46, 44] and also to be universal approximators of symmetric functions [48] (specifically, of functions symmetric under linear representations of compact groups). We borrow from the literature on group-equivariant networks to design policies that tie weights for state-action pairs given their equivalence classes, with the goal of reducing the number of samples needed to find good policies. Furthermore, we can use the MDP homomorphism property to design not just policy networks, but also value networks and even environment models. MDP homomorphic networks are agnostic to the type of model-free DRL algorithm, as long as an appropriate transformation on the output is given. In this paper we focus on equivariant policy and invariant value networks. See Figure 1 for an example policy.
An additional contribution of this paper is a novel numerical way of finding equivariant layers for arbitrary transformation groups. The design of equivariant networks imposes a system of linear constraint equations on the linear/convolutional layers [12, 11, 46, 43]. Solving these equations has typically been done analytically by hand, which is a time-consuming and intricate process, precluding rapid prototyping. Rather than requiring analytical derivation, our method only requires that the system designer specify input and output transformation groups of the form {state transformation, policy transformation}. We provide PyTorch [29] implementations of our equivariant network layers, and implementations of the transformations used in this paper. We also experimentally demonstrate that exploiting equivalences in MDPs leads to faster learning of policies for DRL. Our contributions are two-fold:
• We draw a connection between MDP homomorphisms and group equivariant networks, proposing MDP homomorphic networks to exploit symmetries in decision-making problems;
• We introduce a numerical algorithm for the automated construction of equivariant layers.

2 Background
Here we outline the basics of the theory behind MDP homomorphisms and equivariance. We begin with a brief outline of the concepts of equivalence, invariance, and equivariance, followed by a review of the Markov decision process (MDP). We then review the MDP homomorphism, which builds a map between 'equivalent' MDPs.
2.1 Equivalence, Invariance, and Equivariance
Equivalence. If a function $f : X \to Y$ maps two inputs $x, x' \in X$ to the same value, that is $f(x) = f(x')$, then we say that $x$ and $x'$ are $f$-equivalent. For instance, two states $s, s'$ leading to the same optimal value $V^*(s) = V^*(s')$ would be $V^*$-equivalent or optimal value equivalent [30]. An example of two optimal value equivalent states would be states $s$ and $L[s]$ in the CartPole example of Figure 1. The set of all points $f$-equivalent to $x$ is called the equivalence class of $x$.
Invariance and Symmetries. Typically there exist very intuitive relationships between the points in an equivalence class. In the CartPole example of Figure 1 this relationship is a horizontal flip about the vertical axis. This is formalized with the transformation operator $L_g : X \to X$, where $g \in G$ and $G$ is a mathematical group. If $L_g$ satisfies
$$f(x) = f(L_g[x]), \quad \text{for all } g \in G, x \in X, \qquad (1)$$
then we say that $f$ is invariant or symmetric to $L_g$ and that $\{L_g\}_{g \in G}$ is a set of symmetries of $f$. We can see that for the invariance equation to be satisfied, it must be that $L_g$ can only map $x$ to points in its equivalence class. Note that in abstract algebra, for $L_g$ to be a true transformation operator, $G$ must contain an identity operation; that is, $L_g[x] = x$ for some $g$ and all $x$. An interesting property of transformation operators which leave $f$ invariant is that they can be composed and still leave $f$ invariant, so $L_g \circ L_h$ is also a symmetry of $f$ for all $g, h \in G$. In abstract algebra, this property is known as a semigroup property. If $L_g$ is always invertible, this is called a group property. In this work, we experiment with group-structured transformation operators. For more information, see [14].
One extra helpful concept is that of orbits. If $f$ is invariant to $L_g$, then it is invariant along the orbits of $G$. The orbit $O_x$ of point $x$ is the set of points reachable from $x$ via the transformation operator $L_g$:
$$O_x \triangleq \{L_g[x] \in X \mid g \in G\}. \qquad (2)$$
Equivariance. A related notion to invariance is equivariance. Given a transformation operator $L_g : X \to X$ and a mapping $f : X \to Y$, we say that $f$ is equivariant [11, 46] to the transformation if there exists a second transformation operator $K_g : Y \to Y$ in the output space of $f$ such that
$$K_g[f(x)] = f(L_g[x]), \quad \text{for all } g \in G, x \in X. \qquad (3)$$
The operators $L_g$ and $K_g$ can be seen to describe the same transformation, but in different spaces. In fact, an equivariant map can be seen to map orbits to orbits. We also see that invariance is a special case of equivariance, obtained by setting $K_g$ to the identity operator for all $g$. Given $L_g$ and $K_g$, we can solve for the collection of equivariant functions $f$ satisfying the equivariance constraint. Moreover, for linear transformation operators and linear $f$, a rich theory already exists in which $f$ is referred to as an intertwiner [12]. In the equivariant deep learning literature, neural networks are built from interleaved intertwiners and equivariant nonlinearities. As far as we are aware, most of these methods are hand-designed per pair of transformation operators, with the exception of [13]. In this paper, we introduce a computational method to solve for intertwiners given a pair of transformation operators.
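These definitions are simple to verify numerically. The toy snippet below (our own example) takes the two-element group that swaps the coordinates of $\mathbb{R}^2$ and checks eq. (3) for an equivariant linear map and eq. (1) for an invariant map.

```python
import numpy as np

# Two-element group: identity and coordinate swap on R^2.
L = np.array([[0.0, 1.0], [1.0, 0.0]])  # input transformation L_g
K = np.array([[0.0, 1.0], [1.0, 0.0]])  # output transformation K_g

W = np.array([[2.0, 1.0],
              [1.0, 2.0]])              # weights commuting with the swap

def f(x):
    return W @ x                        # equivariant linear map

def g(x):
    return np.array([x.sum(), x.prod()])  # invariant map (K_g = identity)

x = np.array([0.3, -1.2])
assert np.allclose(K @ f(x), f(L @ x))  # equivariance, eq. (3)
assert np.allclose(g(x), g(L @ x))      # invariance, eq. (1)
```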
2.2 Markov Decision Processes
A Markov decision process (MDP) is a tuple $(S, A, R, T, \gamma)$, with state space $S$, action space $A$, immediate reward function $R : S \times A \to \mathbb{R}$, transition function $T : S \times A \times S \to \mathbb{R}_{\ge 0}$, and discount factor $\gamma \in [0, 1]$. The goal of solving an MDP is to find a policy $\pi \in \Pi$, $\pi : S \times A \to \mathbb{R}_{\ge 0}$ (written $\pi(a|s)$), where $\pi$ normalizes to unity over the action space, that maximizes the expected return $R_t = \mathbb{E}_\pi\left[\sum_{k=0}^{T} \gamma^k r_{t+k+1}\right]$. The expected return from a state $s$ under a policy $\pi$ is given by the value function $V^\pi$. A related object is the Q-value $Q^\pi$, the expected return from a state $s$ after taking action $a$ under $\pi$. $V^\pi$ and $Q^\pi$ are governed by the well-known Bellman equations [5] (see Supplementary). In an MDP, optimal policies $\pi^*$ attain an optimal value $V^*$ and corresponding Q-value given by
$$V^*(s) = \max_{\pi \in \Pi} V^\pi(s) \quad \text{and} \quad Q^*(s, a) = \max_{\pi \in \Pi} Q^\pi(s, a).$$
MDP with Symmetries. Symmetries can appear in MDPs. For instance, in Figure 2 CartPole has a reflection symmetry about the vertical axis. Here we define an MDP with symmetries. In an MDP with symmetries there is a set of transformations on the state-action space which leaves the reward function and transition operator invariant. We define a state transformation and a state-dependent action transformation as $L_g : S \to S$ and $K_g^s : A \to A$ respectively. Invariance of the reward function and transition function is then characterized as
$$R(s, a) = R(L_g[s], K_g^s[a]) \quad \text{for all } g \in G, s \in S, a \in A, \qquad (4)$$
$$T(s'|s, a) = T(L_g[s'] \mid L_g[s], K_g^s[a]) \quad \text{for all } g \in G, s \in S, a \in A. \qquad (5)$$
Written like this, we see that in an MDP with symmetries the reward function and transition operator are invariant along orbits defined by the transformations $(L_g, K_g^s)$.
MDP Homomorphisms. MDPs with symmetries are closely related to MDP homomorphisms, as we explain below. First we define the latter. An MDP homomorphism $h$ [32, 30] is a mapping from one MDP $\mathcal{M} = (S, A, R, T, \gamma)$ to another $\bar{\mathcal{M}} = (\bar S, \bar A, \bar R, \bar T, \gamma)$, defined by a surjective map from the state-action space $S \times A$ to an abstract state-action space $\bar S \times \bar A$. In particular, $h$ consists of a tuple of surjective maps $(\sigma, \{\alpha_s \mid s \in S\})$, where we have the state map $\sigma : S \to \bar S$ and the state-dependent action map $\alpha_s : A \to \bar A$. These maps are built to satisfy the following conditions:
$$\bar R(\sigma(s), \alpha_s(a)) \triangleq R(s, a) \quad \text{for all } s \in S, a \in A, \qquad (6)$$
$$\bar T(\sigma(s') \mid \sigma(s), \alpha_s(a)) \triangleq \sum_{s'' \in \sigma^{-1}(s')} T(s'' \mid s, a) \quad \text{for all } s, s' \in S, a \in A. \qquad (7)$$
An exact MDP homomorphism provides a model-equivalent abstraction [20]. Given an MDP homomorphism $h$, two state-action pairs $(s, a)$ and $(s', a')$ are called $h$-equivalent if $\sigma(s) = \sigma(s')$ and $\alpha_s(a) = \alpha_{s'}(a')$. Symmetries and MDP homomorphisms are connected in a natural way: if an MDP has symmetries $L_g$ and $K_g$, equations (4) and (5) above hold. This means that we can define a corresponding MDP homomorphism, which we do next.
Group-structured MDP Homomorphisms. Specifically, for an MDP with symmetries we can define an abstract state-action space by mapping $(s, a)$ pairs to (a representative point of) their equivalence class $(\sigma(s), \alpha_s(a))$. That is, state-action pairs and their transformed versions are mapped to the same abstract state in the reduced MDP:
$$(\sigma(s), \alpha_s(a)) = \left(\sigma(L_g[s]), \alpha_{L_g[s]}(K_g^s[a])\right) \quad \forall g \in G, s \in S, a \in A. \qquad (8)$$
In this case, we call the resulting MDP homomorphism group-structured. In other words, all the state-action pairs in an orbit defined by a group transformation are mapped to the same abstract state by a group-structured MDP homomorphism.
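For the CartPole example, the group transformations of eqs. (4)-(5) can be written down explicitly. In the sketch below (our own illustration, assuming the reflection negates all four state components, as in the reflection symmetry of Figure 1), the group is {identity, flip}, with $L_g$ mirroring the state and $K_g^s$ swapping the two pushes.

```python
import numpy as np

# CartPole reflection symmetry: group G = {0 (identity), 1 (flip)}.

def L_g(state, g):
    """State transformation: reflect (x, theta, xdot, thetadot) when g=1."""
    return -state if g == 1 else state

def K_g(action, g):
    """Action transformation: swap left (0) and right (1) when g=1."""
    return 1 - action if g == 1 else action

s = np.array([0.1, 0.05, -0.3, 0.2])
a = 0  # push left
# (s, a) and (L_g[s], K_g[a]) lie in the same orbit and are mapped to the
# same abstract state-action pair by a group-structured MDP homomorphism.
print(L_g(s, 1), K_g(a, 1))
```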
Optimal Value Equivalence and Lifted Policies. $h$-equivalent state-action pairs share the same optimal Q-value and optimal value function [30]. Furthermore, there exists an abstract optimal Q-value $\bar Q^*$ and abstract optimal value function $\bar V^*$, such that $Q^*(s, a) = \bar Q^*(\sigma(s), \alpha_s(a))$ and $V^*(s) = \bar V^*(\sigma(s))$. This is known as optimal value equivalence [30]. Policies can thus be optimized in the simpler abstract MDP. The optimal abstract policy $\bar\pi(\bar a \mid \sigma(s))$ can then be pulled back to the original MDP using a procedure called lifting (we use the terminology lifting to stay consistent with [30]). The lifted policy is given in Equation 9. A lifted optimal abstract policy is also an optimal policy in the original MDP [30]. Note that while other lifted policies exist, we follow [30, 32] and choose the lifting that divides probability mass uniformly over the preimage:
$$\pi^\uparrow(a|s) \triangleq \frac{\bar\pi(\bar a \mid \sigma(s))}{|\{a \in \alpha_s^{-1}(\bar a)\}|}, \quad \text{for any } s \in S \text{ and } a \in \alpha_s^{-1}(\bar a). \qquad (9)$$

3 Method
The focus of this section is on the design of MDP homomorphic networks: policy networks and value networks obeying the MDP homomorphism. In the first part of the method, we show that any policy network satisfying the MDP homomorphism property must be an equivariant neural network. In the second part, we introduce a novel numerical technique for constructing group-equivariant networks, based on the transformation operators defining the equivalence of state-action pairs under the MDP homomorphism.

3.1 Lifted Policies Are Invariant
Lifted policies in symmetric MDPs with group-structured symmetries are invariant under the group of symmetries. Consider the following: take an MDP with symmetries defined by transformation operators $(L_g, K_g^s)$ for $g \in G$. Now, if we take $s' = L_g[s]$ and $a' = K_g^s[a]$ for any $g \in G$, then $(s', a')$ and $(s, a)$ are $h$-equivalent under the corresponding MDP homomorphism $h = (\sigma, \{\alpha_s \mid s \in S\})$. So
$$\pi^\uparrow(a|s) = \frac{\bar\pi(\alpha_s(a) \mid \sigma(s))}{|\{a \in \alpha_s^{-1}(\bar a)\}|} = \frac{\bar\pi(\alpha_{s'}(a') \mid \sigma(s'))}{|\{a' \in \alpha_{s'}^{-1}(\bar a)\}|} = \pi^\uparrow(a'|s'), \qquad (10)$$
for all $s \in S$, $a \in A$ and $g \in G$. In the first equality we have used the definition of the lifted policy. In the second equality, we have used the definition of $h$-equivalent state-action pairs, where $\sigma(s) = \sigma(L_g[s])$ and $\alpha_s(a) = \alpha_{s'}(a')$. In the third equality, we have reused the definition of the lifted policy. Thus we see that, written in this way, the lifted policy is invariant under state-action transformations $(L_g, K_g^s)$. This equation is very general and applies to all group-structured state-action transformations. For a finite action space, this statement of invariance can be re-expressed as a statement of equivariance, by considering the vectorized policy.
Invariant Policies On Finite Action Spaces Are Equivariant Vectorized Policies. For convenience we introduce a vector of probabilities for each of the discrete actions under the policy
$$\pi(s) \triangleq [\pi(a_1|s), \pi(a_2|s), \dots, \pi(a_N|s)]^\top, \qquad (11)$$
where $a_1, \dots, a_N$ are the $N$ possible discrete actions in action space $A$. The action transformation $K_g^s$ maps actions to actions invertibly. Thus applying an action transformation to the vectorized policy permutes its elements. We write the corresponding permutation matrix as $K_g$. Note that
$$K_g^{-1}\pi(s) \triangleq [\pi(K_g^s[a_1]|s), \pi(K_g^s[a_2]|s), \dots, \pi(K_g^s[a_N]|s)]^\top, \qquad (12)$$
where writing the inverse $K_g^{-1}$ instead of $K_g$ is required to maintain the property $K_g K_h = K_{gh}$. The invariance of the lifted policy can then be written as $\pi^\uparrow(s) = K_g^{-1}\pi^\uparrow(L_g[s])$, which can be rearranged to the equivariance equation
$$K_g \pi^\uparrow(s) = \pi^\uparrow(L_g[s]) \quad \text{for all } g \in G, s \in S. \qquad (13)$$
Equation 13 shows that the lifted policy must satisfy an equivariance constraint. In deep learning, this has already been well-explored in the context of supervised learning [11, 12, 46, 47, 43]. Next, we present a novel way to construct such networks.

3.2 Building MDP Homomorphic Networks

Our goal is to build neural networks that follow Eq. 13; that is, we wish to find neural networks that are equivariant under a set of state and policy transformations. Equivariant networks are common in supervised learning [11, 12, 46, 47, 43, 41]. For instance, in semantic segmentation, shifts and rotations of the input image result in shifts and rotations of the segmentation. A neural network consisting of only equivariant layers and non-linearities is equivariant as a whole, too [11] (see Appendix B for more details). Thus, once we know how to build a single equivariant layer, we can simply stack such layers together. Note that this is true regardless of the representation of the group, i.e., this works for spatial transformations of the input, feature map permutations in intermediate layers, and policy transformations in the output layer. For the experiments presented in this paper, we use the same group representations for the intermediate layers as for the output, i.e., permutations. For finite groups, such as cyclic groups or permutations, pointwise nonlinearities preserve equivariance [11].

In the past, learnable equivariant layers were designed by hand for each transformation group individually [11, 12, 46, 47, 44, 43, 41]. This is time-consuming and laborious. Here we present a novel way to build learnable linear layers that satisfy equivariance automatically.

Equivariant Layers  We begin with a single linear layer z′ = Wz + b, where W ∈ ℝ^{D_out × D_in} and b ∈ ℝ^{D_out} is a bias. To simplify the math, we merge the bias into the weights, so W ↦ [W, b] and z ↦ [z, 1]^⊤. We denote the space of the augmented weights as 𝒲_total. For a given pair of linear group transformation operators in matrix form (L_g, K_g), where L_g is the input transformation and K_g is the output transformation, we then have to solve the equation

K_g W z = W L_g z,   for all g ∈ G, z ∈ ℝ^{D_in + 1}.   (14)

Since this equation holds for all z, we can in fact drop z entirely. Our task now is to find all weights W which satisfy Equation 14. We label this space of equivariant weights as 𝒲, defined as

𝒲 ≜ {W ∈ 𝒲_total | K_g W = W L_g, for all g ∈ G},   (15)

again noting that we have dropped z. To find the space 𝒲, notice that for each g ∈ G the constraint K_g W = W L_g is in fact linear in W. Thus, to find 𝒲 we need to solve a set of linear equations in W. For this we introduce a construction, which we call a symmetrizer S(W):

S(W) ≜ (1/|G|) Σ_{g∈G} K_g⁻¹ W L_g.   (16)

S has the following important properties, of which proofs are provided in Appendix A. First, S(W) is symmetric (S(W) ∈ 𝒲). Second, S fixes any symmetric W: (W ∈ 𝒲 ⟹ S(W) = W). These properties show that S projects arbitrary W ∈ 𝒲_total to the equivariant subspace 𝒲. Since 𝒲 is the solution set for a set of simultaneous linear equations, 𝒲 is a linear subspace of the space of all possible weights 𝒲_total. Thus each W ∈ 𝒲 can be parametrized as a linear combination of basis weights {V_i}_{i=1}^{r}, where r is the rank of the subspace and span({V_i}_{i=1}^{r}) = 𝒲. To find a basis for 𝒲, we take a Gram-Schmidt orthogonalization approach. We first sample weights in the total space 𝒲_total and then project them into the equivariant subspace with the symmetrizer.
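As a hedged illustration, the sketch below implements the symmetrizer of Equation (16) in NumPy for a CartPole-style reflection group, in which L_g negates the four state variables and K_g swaps the two actions (bias augmentation is omitted for brevity), and verifies that S projects onto the equivariant subspace and fixes points already in it. The operators and shapes are our assumptions, not the paper's released code.

```python
# Symmetrizer S(W) = (1/|G|) sum_g K_g^{-1} W L_g  (Eq. 16); shapes illustrative.
import numpy as np

Ls = [np.eye(4), -np.eye(4)]                          # input (state) operators
Ks = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]  # output (policy) operators

def symmetrize(W):
    return sum(np.linalg.inv(K) @ W @ L for K, L in zip(Ks, Ls)) / len(Ls)

W = np.random.randn(2, 4)
SW = symmetrize(W)

for K, L in zip(Ks, Ls):
    assert np.allclose(K @ SW, SW @ L)   # S(W) lies in the equivariant subspace
assert np.allclose(symmetrize(SW), SW)   # S fixes equivariant weights
print("symmetrizer projects onto the equivariant subspace")
```

For this particular group the constraint K_g W = W L_g forces the second row of W to be the negation of the first, so the projection discards half the degrees of freedom; this is precisely the parameter sharing the construction is after.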
We repeat this sampling and projection for multiple weight matrices, which we then stack and feed through a singular value decomposition to find a basis for the equivariant space. This procedure is outlined in Algorithm 1.

Algorithm 1: Equivariant layer construction
1: Sample N weight matrices W_1, W_2, ..., W_N ∼ 𝒩(W; 0, I) for N ≥ dim(𝒲_total)
2: Symmetrize the samples: W̄_i = S(W_i) for i = 1, ..., N
3: Vectorize the samples and stack them as W̄ = [vec(W̄_1), vec(W̄_2), ...]
4: Apply SVD: W̄ = UΣV^⊤
5: Keep the first r = rank(W̄) right-singular vectors (columns of V) and unvectorize them to the shape of W_i

Any equivariant layer can then be written as a linear combination of bases

W = Σ_{i=1}^{r} c_i V_i,   (17)

where the c_i's are learnable scalar coefficients, r is the rank of the equivariant space, and the matrices V_i are the basis vectors, formed from the reshaped right-singular vectors of the SVD. An example is shown in Figure 3. To run this procedure, all that is needed are the transformation operators L_g and K_g. Note that we do not need to know the explicit transformation matrices, but only to be able to perform the mappings W ↦ W L_g and W ↦ K_g⁻¹ W. For instance, some matrix L_g rotates an image patch, but we could equally implement W L_g using a built-in rotation function. Code is available at https://github.com/ElisevanderPol/symmetrizer/.

4 Experiments

We evaluated three flavors of MDP homomorphic network—an MLP, a CNN, and an equivariant feature extractor—on three RL tasks that exhibit group symmetry: CartPole, a grid world, and Pong. We use RLPYT [36] for the algorithms. Hyperparameters (and the ranges considered), architectures, and group implementation details are in the Supplementary Material. Code is available at https://github.com/ElisevanderPol/mdp-homomorphic-networks.

4.1 Environments

For each environment we describe S and A with the respective representations of the group transformations.

CartPole  In the classic pole balancing task [3], we used a two-element group of reflections about the y-axis. We used OpenAI's CartPole-v1 [7] implementation, which has a 4-dimensional observation vector: (cart position x, pole angle θ, cart velocity ẋ, pole angular velocity θ̇). The (discrete) action space consists of applying a force left or right (←, →). We chose this example for its simple symmetries.

Grid world  We evaluated on a toroidal 7-by-7 predator-prey grid world with agent-centered coordinates. The prey and predator are randomly placed at the start of each episode, which lasts at most 100 time steps. The agent's goal is to catch the prey, which takes a step in a random compass direction with probability 0.15 and stands still otherwise. Upon catching the prey, the agent receives a reward of +1, and -0.1 otherwise. The observation is a 21 × 21 binary image identifying the position of the agent in the center and the prey in relative coordinates. See Figure 6a. This environment was chosen for its four-fold rotational symmetry.

Pong  We evaluated on the RLPYT [36] implementation of Pong. In our experiments, the observation consisted of the 4 last observed frames, with upper and lower margins cut off, downscaled to an 80 × 80 grayscale image. In this setting there is a flip symmetry over the horizontal axis: if we flip the observations, the up and down actions also flip. A curious artifact of Pong is that it has duplicate (up, down) actions; to simplify matters, we mask out the policy values for the second pair of (up, down) actions. We chose Pong for its higher-dimensional state space.
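Before turning to the baselines and models, the following sketch connects the CartPole group above back to Section 3.2: it runs Algorithm 1 for the two-element reflection group (negate all four state variables, swap the two actions) and assembles the equivariant layer of Equation (17). The operators, sizes, and helper names are our illustrative assumptions; the authors' reference implementations are at the repositories linked above.

```python
# Algorithm 1 end-to-end for the CartPole reflection group (bias omitted).
import numpy as np

rng = np.random.default_rng(0)
Ls = [np.eye(4), -np.eye(4)]                          # L_g on (x, θ, ẋ, θ̇)
Ks = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]  # K_g on (←, →)

def symmetrize(W):                                    # Eq. (16)
    return sum(np.linalg.inv(K) @ W @ L for K, L in zip(Ks, Ls)) / len(Ls)

# Steps 1-2: sample and symmetrize; N >= dim(W_total) = 2 * 4 = 8 here.
N, out_dim, in_dim = 16, 2, 4
samples = [symmetrize(rng.standard_normal((out_dim, in_dim))) for _ in range(N)]

# Steps 3-5: stack vectorized samples, SVD, keep rank-r right-singular vectors.
stacked = np.stack([W.ravel() for W in samples])
_, _, Vt = np.linalg.svd(stacked)
r = np.linalg.matrix_rank(stacked)
basis = Vt[:r].reshape(r, out_dim, in_dim)

# Eq. (17): an equivariant layer is a learned combination of the fixed basis.
c = rng.standard_normal(r)                            # learnable coefficients c_i
W = np.tensordot(c, basis, axes=1)
for K, L in zip(Ks, Ls):
    assert np.allclose(K @ W, W @ L)                  # Eq. (14) holds for any c
print(f"equivariant subspace rank r = {r}")           # here r = 4
```

In a network, the basis matrices V_i would be stored as fixed buffers and only the coefficients c_i trained, which is what reduces the degrees of freedom relative to an unconstrained layer.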
Finally, for Pong we additionally compare to two data augmentation baselines: stochastic data augmentation, where each state-action pair is randomly transformed (or not) before being fed to the network, and an equivariant version of [16], similar to [35], where both the state and the transformed state are input to the network; the output for the transformed state is appropriately transformed back, and the two policies are averaged.

4.2 Models

We implemented MDP homomorphic networks on top of two base architectures: an MLP and a CNN (exact architectures in the Supplementary). We further experimented with an equivariant feature extractor, followed by a non-equivariant network, to isolate where equivariance made the greatest impact.

Basis Networks  We call networks whose weights are linear combinations of basis weights basis networks. As an ablation study on all equivariant networks, we sought to measure the effects of the basis training dynamics. We compared an equivariant basis against a pure nullspace basis, i.e., an explicitly non-symmetric basis using the right-null vectors from the equivariant layer construction, and a random basis, where we skip the symmetrization step in the layer construction and use the full-rank basis. Unless stated otherwise, we reduce the number of 'channels' in the basis networks compared to the regular networks by dividing by the square root of the group size, ending up with a comparable number of trainable parameters.

4.3 Results and Discussion

We show training curves for CartPole in Figures 4a-4b, for Pong in Figure 4c, and for the grid world in Figure 6. Across all experiments we observed that the MDP homomorphic network outperforms both the non-equivariant basis networks and the standard architectures in terms of convergence speed. This confirms our motivation that building symmetry-preserving policy networks leads to faster convergence. Additionally, when compared to the data augmentation baselines in Figure 5, using equivariant networks is more beneficial. This is consistent with other results in the equivariance literature [4, 42, 44, 46]. While data augmentation can be used to create a larger dataset by exploiting symmetries, it does not directly lead to effective parameter sharing, as our approach does. Note that in Pong we train only on the first 15 million frames to highlight the difference at the start of training; in contrast, a typical training duration is 50-200 million frames [25, 36].

For our ablation experiment, we wanted to control for the introduction of bases. It is not clear a priori that a network with a basis has the same gradient descent dynamics as an equivalent 'basisless' network. We compared equivariant, non-equivariant, and random bases, as mentioned above. We found that the equivariant basis led to the fastest convergence. Figures 4a and 4c show that for CartPole and Pong the nullspace basis converged faster than the random basis. In the grid world there was no clear winner between the two. This is a curious result, requiring deeper investigation in a follow-up.

For a third experiment, we investigated what happens if we sacrifice complete equivariance of the policy. This is attractive because it removes the need to find a transformation operator for a flattened output feature map. Instead, we maintained only an equivariant feature extractor, compared against a basic CNN feature extractor. The networks built on top of these extractors were MLPs.
The results, in Figure 4c, are two-fold: 1) basis feature extractors converge faster than standard CNNs, and 2) the equivariant feature extractor has the fastest convergence. We hypothesize the equivariant feature extractor is fastest because it is easiest to learn an equivariant policy from equivariant features. We additionally compared an equivariant feature extractor to a regular convolutional network on the Atari game Breakout, where the difference between the equivariant network and the regular network is much less pronounced. For details, see Appendix C.

5 Related Work

Past work on MDP homomorphisms has often aimed at discovering the map itself based on knowledge of the transition and reward function, and under the assumption of enumerable state spaces [30, 31, 32, 38]. Other work relies on learning the map from experience sampled from the MDP [39, 6, 23]. Exactly computing symmetries in MDPs is graph-isomorphism complete [27], even with full knowledge of the MDP dynamics. Rather than assuming knowledge of the transition and reward function, and small and enumerable state spaces, in this work we take the inverse view: we assume that we have an easily identifiable transformation of the joint state-action space and exploit this knowledge to learn more efficiently.

Exploiting symmetries in deep RL has been previously explored in the game of Go, in the form of symmetric filter weights [33, 8] or data augmentation [35]. Other work on data augmentation increases sample efficiency and generalization on well-known benchmarks by augmenting existing data points with state transformations such as random translations, cutout, color jitter, and random convolutions [16, 9, 17, 19]. In contrast, we encode symmetries into the neural network weights, leading to more parameter sharing. Additionally, such data augmentation approaches tend to take the invariance view, augmenting existing data with state transformations that leave the state's Q-values intact [16, 9, 17, 19] (the exceptions being [21] and [24], who augment trajectories rather than just states). Similarly, permutation-invariant networks are commonly used in approaches to multi-agent RL [37, 22, 15]. We instead take the equivariance view, which accommodates a much larger class of symmetries that includes transformations on the action space. Abdolhosseini et al. [1] have previously manually constructed an equivariant network for a single group of symmetries in a single RL problem, namely reflections in a bipedal locomotion task. Our MDP homomorphic networks allow for the automated construction of networks that are equivariant under arbitrary discrete groups and are therefore applicable to a wide variety of problems.

From an equivariance point of view, the automatic construction of equivariant layers is new. [12] comes close to specifying a procedure, outlining the system of equations to solve, but does not specify an algorithm. The basic theory of group equivariant networks was outlined in [11, 12] and [10], with notable implementations for 2D roto-translations on grids [46, 43, 41] and 3D roto-translations on grids [45, 44, 42]. All of these works have relied on hand-constructed equivariant layers.

6 Conclusion

This paper introduced MDP homomorphic networks, a family of deep architectures for reinforcement learning problems where symmetries have been identified. MDP homomorphic networks tie weights over symmetric state-action pairs.
This weight-tying leads to fewer degrees of freedom, and in our experiments we found that this translates into faster convergence. We used the established theory of MDP homomorphisms to motivate the use of equivariant networks, thus formalizing the connection between equivariant networks and symmetries in reinforcement learning. As an innovation, we also introduced the first method to automatically construct equivariant network layers given a specification of the symmetries in question, thus removing a significant implementational obstacle. For future work, we want to further understand the symmetrizer and its effect on learning dynamics, as well as to generalize to problems that are not fully symmetric.

7 Acknowledgments and Funding Disclosure

Elise van der Pol was funded by Robert Bosch GmbH. Daniel Worrall was funded by Philips. F.A.O. received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 758824 — INFLUENCE). Max Welling reports part-time employment at Qualcomm AI Research.

8 Broader Impact

The goal of this paper is to make (deep) reinforcement learning techniques more efficient at solving Markov decision processes (MDPs) by making use of prior knowledge about symmetries. We do not expect the particular algorithm we develop to lead to immediate societal risks. However, Markov decision processes are very general and can, e.g., be used to model problems in autonomous driving, smart grids, and scheduling. Thus, solving such problems more efficiently can in the long run have positive or negative societal impact. For example, making transportation or power grids more efficient, thereby making better use of scarce resources, would be a significantly positive impact. Other potential applications, such as in autonomous weapons, pose a societal risk [28]. Like many AI technologies, when used in automation, our technology can have a positive impact (increased productivity) and a negative impact (decreased demand) on labor markets. More immediately, control strategies learned using RL techniques are hard to verify and validate. Without proper precautions (e.g., [40]), employing such control strategies on physical systems thus runs the risk of causing accidents involving people, e.g., due to reward misspecification, unsafe exploration, or distributional shift [2].
1. What is the focus and contribution of the paper on reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to incorporate prior knowledge of symmetries in MDPs?
3. What are the weaknesses of the paper, especially regarding the simplicity and specificity of the experiment environments/tasks?
4. How might the method be applied to more advanced RL tasks with a greater representation-learning component?
5. What would a more general discussion of symmetries in other common research environments cover, and what kinds of symmetries can and cannot be represented?
Summary and Contributions
This paper introduces a method to account for symmetries in the state/observation and action spaces of MDPs when constructing deep NN policies for RL. It provides a fairly general approach for constructing trainable deep NNs which are equivariant under known symmetries, and shows in experiments that incorporating this prior knowledge can accelerate learning.

Strengths
Strengths of this work include the contribution of a novel and fairly general-purpose method for constructing deep NNs possessing equivariant properties, including convolution layers. This is an important area of study, as it has remained difficult to imbue deep RL agents with prior knowledge of such MDP structures. The experiments demonstrate faster learning by equivariant networks for the cases studied.

Weaknesses
The main weakness of the paper is the simplicity and specificity of the experiment environments/tasks. The method would be much more convincing if applied to an RL task with a more challenging representation-learning component. Examples could include a more advanced Atari game, DMControl from vision, or even better some 3-D visual environment. A more general discussion of symmetries in other common research environments, and what kinds of symmetries can/cannot be represented, would be useful. More discussion about why the nullspace- and random-basis learning methods outperform the baseline convolutional approach in Pong would be useful, as this already provides most of the gain in performance.
NIPS
Title
MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning

Abstract
This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong.

1 Introduction

This paper considers learning decision-making systems that exploit symmetries in the structure of the world. Deep reinforcement learning (DRL) is concerned with learning neural function approximators for decision-making strategies. While DRL algorithms have been shown to solve complex, high-dimensional problems [35, 34, 26, 25], they are often used in problems with large state-action spaces, and thus require many samples before convergence. Many tasks exhibit symmetries, easily recognized by the designer of a reinforcement learning system. Consider the classic control task of balancing a pole on a cart. Balancing a pole that falls to the right requires an equivalent, but mirrored, strategy to one that falls to the left. See Figure 1.

In this paper, we exploit knowledge of such symmetries in the state-action space of Markov decision processes (MDPs) to reduce the size of the solution space. We use the notion of MDP homomorphisms [32, 30] to formalize these symmetries. Intuitively, an MDP homomorphism is a map between MDPs, preserving the essential structure of the original MDP while removing redundancies in the problem description, i.e., equivalent state-action pairs. The removal of these redundancies results in a smaller state-action space, upon which we may more easily build a policy. While earlier work has been concerned with discovering an MDP homomorphism for a given MDP [32, 30, 27, 31, 6, 39], we are instead concerned with how to construct deep policies satisfying the MDP homomorphism. We call these models MDP homomorphic networks.
MDP homomorphic networks use experience from one state-action pair to improve the policy for all 'equivalent' pairs (see Section 2.1 for a definition). They do this by tying the weights for two states if they are equivalent under a transformation chosen by the designer, such as s and L[s] in Figure 1. Such weight-tying follows a similar principle to the use of convolutional networks [18], which are equivariant to translations of the input [11]. In particular, when equivalent state-action pairs can be related by an invertible transformation, which we refer to as group-structured, we show that the policy network belongs to the class of group-equivariant neural networks [11, 46]. Equivariant neural networks are a class of neural network with built-in symmetries [11, 12, 46, 43, 41]. They are a generalization of convolutional neural networks—which exhibit translation symmetry—to transformation groups (group-structured equivariance) and transformation semigroups [47] (semigroup-structured equivariance). They have been shown to reduce sample complexity for classification tasks [46, 44] and also to be universal approximators of symmetric functions [48] (specifically, of functions symmetric under linear representations of compact groups). We borrow from the literature on group equivariant networks to design policies that tie weights for state-action pairs given their equivalence classes, with the goal of reducing the number of samples needed to find good policies. Furthermore, we can use the MDP homomorphism property to design not just policy networks, but also value networks and even environment models. MDP homomorphic networks are agnostic to the type of model-free DRL algorithm, as long as an appropriate transformation on the output is given. In this paper we focus on equivariant policy networks and invariant value networks. See Figure 1 for an example policy.

An additional contribution of this paper is a novel numerical way of finding equivariant layers for arbitrary transformation groups. The design of equivariant networks imposes a system of linear constraint equations on the linear/convolutional layers [12, 11, 46, 43]. Solving these equations has typically been done analytically by hand, which is a time-consuming and intricate process, barring rapid prototyping. Rather than requiring analytical derivation, our method only requires that the system designer specify input and output transformation groups of the form {state transformation, policy transformation}. We provide PyTorch [29] implementations of our equivariant network layers, and implementations of the transformations used in this paper. We also experimentally demonstrate that exploiting equivalences in MDPs leads to faster learning of policies for DRL.

Our contributions are two-fold:
• We draw a connection between MDP homomorphisms and group equivariant networks, proposing MDP homomorphic networks to exploit symmetries in decision-making problems;
• We introduce a numerical algorithm for the automated construction of equivariant layers.

2 Background

Here we outline the basics of the theory behind MDP homomorphisms and equivariance. We begin with a brief outline of the concepts of equivalence, invariance, and equivariance, followed by a review of the Markov decision process (MDP). We then review the MDP homomorphism, which builds a map between 'equivalent' MDPs.
2.1 Equivalence, Invariance, and Equivariance

Equivalence  If a function f : X → Y maps two inputs x, x′ ∈ X to the same value, that is f(x) = f(x′), then we say that x and x′ are f-equivalent. For instance, two states s, s′ leading to the same optimal value V*(s) = V*(s′) would be V*-equivalent or optimal value equivalent [30]. An example of two optimal value equivalent states would be the states s and L[s] in the CartPole example of Figure 1. The set of all points f-equivalent to x is called the equivalence class of x.

Invariance and Symmetries  Typically there exist very intuitive relationships between the points in an equivalence class. In the CartPole example of Figure 1 this relationship is a horizontal flip about the vertical axis. This is formalized with the transformation operator L_g : X → X, where g ∈ G and G is a mathematical group. If L_g satisfies

f(x) = f(L_g[x]),   for all g ∈ G, x ∈ X,   (1)

then we say that f is invariant or symmetric to L_g and that {L_g}_{g∈G} is a set of symmetries of f. We can see that for the invariance equation to be satisfied, it must be that L_g can only map x to points in its equivalence class. Note that in abstract algebra, for L_g to be a true transformation operator, G must contain an identity operation; that is, L_g[x] = x for some g and all x. An interesting property of transformation operators which leave f invariant is that they can be composed and still leave f invariant, so L_g ∘ L_h is also a symmetry of f for all g, h ∈ G. In abstract algebra, this property is known as a semigroup property. If L_g is always invertible, this is called a group property. In this work, we experiment with group-structured transformation operators. For more information, see [14]. One extra helpful concept is that of orbits. If f is invariant to L_g, then it is invariant along the orbits of G. The orbit O_x of a point x is the set of points reachable from x via the transformation operators L_g:

O_x ≜ {L_g[x] ∈ X | g ∈ G}.   (2)

Equivariance  A related notion to invariance is equivariance. Given a transformation operator L_g : X → X and a mapping f : X → Y, we say that f is equivariant [11, 46] to the transformation if there exists a second transformation operator K_g : Y → Y in the output space of f such that

K_g[f(x)] = f(L_g[x]),   for all g ∈ G, x ∈ X.   (3)

The operators L_g and K_g can be seen to describe the same transformation, but in different spaces. In fact, an equivariant map can be seen to map orbits to orbits. We also see that invariance is a special case of equivariance, obtained by setting K_g to the identity operator for all g. Given L_g and K_g, we can solve for the collection of equivariant functions f satisfying the equivariance constraint. Moreover, for linear transformation operators and linear f, a rich theory already exists in which f is referred to as an intertwiner [12]. In the equivariant deep learning literature, neural networks are built from interleaved intertwiners and equivariant nonlinearities. As far as we are aware, most of these methods are hand-designed per pair of transformation operators, with the exception of [13]. In this paper, we introduce a computational method to solve for intertwiners given a pair of transformation operators.
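As a quick numeric illustration of Equations (1) and (3) (our own toy example, not taken from the paper): the Euclidean norm of a vector is invariant under 2-D rotations, while the map f(x) = ‖x‖x is equivariant, with the output operator K_g equal to L_g.

```python
# Invariance (Eq. 1) vs. equivariance (Eq. 3) under 2-D rotations.
import numpy as np

def rot(theta):                            # transformation operator L_g
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

f_inv = lambda x: np.linalg.norm(x)        # invariant map
f_eq = lambda x: np.linalg.norm(x) * x     # equivariant map, K_g = L_g

x = np.array([0.8, -1.5])
for theta in (0.0, np.pi / 2, 1.3):
    L = rot(theta)
    assert np.isclose(f_inv(x), f_inv(L @ x))        # Eq. (1)
    assert np.allclose(L @ f_eq(x), f_eq(L @ x))     # Eq. (3)
print("norm is invariant; x * ||x|| is equivariant under rotations")
```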
1. What is the primary contribution of the paper regarding neural network architectures?
2. What are the strengths of the proposed approach, particularly in terms of symmetry and efficiency?
3. What are the weaknesses of the paper, especially regarding the experiment section and the construction of full neural networks?
4. How does the reviewer assess the clarity and novelty of the proposed method for discovering bases of symmetrical weights?
5. What are the limitations of the paper, including the simplicity of the toy domains and the question of extending to more challenging problems?
Summary and Contributions
The paper presents a method of constructing neural network architectures which hardcode symmetries of MDPs (for example, flipping CartPole left-to-right while also interchanging the actions yields an equivalent environment), with the goal of learning more efficiently if such symmetries are known in advance. A main contribution is an automated method for finding a basis of matrices which obey a particular symmetry. Such bases are used to construct network layers which respect the symmetries of the problem, as a combination of basis components weighted by trainable parameters. Experiments demonstrate that on 3 RL toy problems, using such an architecture speeds learning compared to conventional neural networks and randomly chosen bases.

Strengths
The basic idea is reasonable and clearly explained. The proposed method for discovering bases of symmetrical weights is interesting and, as far as I know, novel. The experiments seem reasonably well thought out and illustrate the merit of the proposed approach.

Weaknesses
The main paper only presents the construction of an equivariant linear layer, leaving the construction of the full neural networks used in the experiments to the appendix. Even in the appendix I found the explanation to be somewhat limited, so I feel this should be clarified and expanded. Experiments are limited to simple toy domains, leaving open the question of extending to more difficult problems.
NIPS
Title MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning Abstract This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong. N/A This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong. 1 Introduction This paper considers learning decision-making systems that exploit symmetries in the structure of the world. Deep reinforcement learning (DRL) is concerned with learning neural function approximators for decision making strategies. While DRL algorithms have been shown to solve complex, highdimensional problems [35, 34, 26, 25], they are often used in problems with large state-action spaces, and thus require many samples before convergence. Many tasks exhibit symmetries, easily recognized by a designer of a reinforcement learning system. Consider the classic control task of balancing a pole on a cart. Balancing a pole that falls to the right requires an equivalent, but mirrored, strategy to one that falls to the left. See Figure 1. In this paper, we exploit knowledge of such symmetries in the state-action space of Markov decision processes (MDPs) to reduce the size of the solution space. We use the notion of MDP homomorphisms [32, 30] to formalize these symmetries. Intuitively, an MDP homomorphism is a map between MDPs, preserving the essential structure of the original MDP, while removing redundancies in the problem description, i.e., equivalent state-action pairs. The removal of these redundancies results in a smaller state-action space, upon which we may more easily build a policy. While earlier work has been concerned with discovering an MDP homomorphism for a given MDP [32, 30, 27, 31, 6, 39], we are instead concerned with how to construct deep policies, satisfying the MDP homomorphism. We call these models MDP homomorphic networks. 
MDP homomorphic networks use experience from one state-action pair to improve the policy for all ‘equivalent’ pairs. See Section 2.1 for a definition. They do this by tying the weights for two states if they are equivalent under a transformation chosen by the designer, such as s and L[s] in Figure 1. Such weight-tying follows a similar principle to the use of convolutional networks [18], which are equivariant to translations of the input [11]. In particular, when equivalent state-action pairs can be related by an invertible transformation, which we refer to as group-structured, we show that the policy network belongs to the class of group-equivariant neural networks [11, 46]. Equivariant neural networks are a class of neural networks with built-in symmetries [11, 12, 46, 43, 41]. They are a generalization of convolutional neural networks—which exhibit translation symmetry—to transformation groups (group-structured equivariance) and transformation semigroups [47] (semigroup-structured equivariance). They have been shown to reduce sample complexity for classification tasks [46, 44] and also to be universal approximators of symmetric functions [48] (specifically, of functions symmetric under linear representations of compact groups). We borrow from the literature on group equivariant networks to design policies that tie weights for state-action pairs given their equivalence classes, with the goal of reducing the number of samples needed to find good policies. Furthermore, we can use the MDP homomorphism property to design not just policy networks, but also value networks and even environment models. MDP homomorphic networks are agnostic to the type of model-free DRL algorithm, as long as an appropriate transformation on the output is given. In this paper we focus on equivariant policy and invariant value networks. See Figure 1 for an example policy.

An additional contribution of this paper is a novel numerical way of finding equivariant layers for arbitrary transformation groups. The design of equivariant networks imposes a system of linear constraint equations on the linear/convolutional layers [12, 11, 46, 43]. Solving these equations has typically been done analytically by hand, which is a time-consuming and intricate process, barring rapid prototyping. Rather than requiring analytical derivation, our method only requires that the system designer specify input and output transformation groups of the form {state transformation, policy transformation}. We provide PyTorch [29] implementations of our equivariant network layers, and implementations of the transformations used in this paper. We also experimentally demonstrate that exploiting equivalences in MDPs leads to faster learning of policies for DRL. Our contributions are two-fold:

• We draw a connection between MDP homomorphisms and group equivariant networks, proposing MDP homomorphic networks to exploit symmetries in decision-making problems;
• We introduce a numerical algorithm for the automated construction of equivariant layers.

2 Background Here we outline the basics of the theory behind MDP homomorphisms and equivariance. We begin with a brief outline of the concepts of equivalence, invariance, and equivariance, followed by a review of the Markov decision process (MDP). We then review the MDP homomorphism, which builds a map between ‘equivalent’ MDPs.
2.1 Equivalence, Invariance, and Equivariance

Equivalence If a function f : X → Y maps two inputs x, x′ ∈ X to the same value, that is f(x) = f(x′), then we say that x and x′ are f-equivalent. For instance, two states s, s′ leading to the same optimal value V∗(s) = V∗(s′) would be V∗-equivalent or optimal value equivalent [30]. An example of two optimal value equivalent states would be states s and L[s] in the CartPole example of Figure 1. The set of all points f-equivalent to x is called the equivalence class of x.

Invariance and Symmetries Typically there exist very intuitive relationships between the points in an equivalence class. In the CartPole example of Figure 1 this relationship is a horizontal flip about the vertical axis. This is formalized with the transformation operator Lg : X → X, where g ∈ G and G is a mathematical group. If Lg satisfies

$$f(x) = f(L_g[x]), \quad \text{for all } g \in G, x \in X, \tag{1}$$

then we say that f is invariant or symmetric to Lg and that {Lg}g∈G is a set of symmetries of f. We can see that for the invariance equation to be satisfied, it must be that Lg can only map x to points in its equivalence class. Note that in abstract algebra, for Lg to be a true transformation operator, G must contain an identity operation; that is, Lg[x] = x for some g and all x. An interesting property of transformation operators which leave f invariant is that they can be composed and still leave f invariant, so Lg ◦ Lh is also a symmetry of f for all g, h ∈ G. In abstract algebra, this property is known as a semigroup property. If Lg is always invertible, this is called a group property. In this work, we experiment with group-structured transformation operators. For more information, see [14]. One extra helpful concept is that of orbits. If f is invariant to Lg, then it is invariant along the orbits of G. The orbit Ox of point x is the set of points reachable from x via the transformation operator Lg:

$$O_x \triangleq \{L_g[x] \in X \mid g \in G\}. \tag{2}$$

Equivariance A related notion to invariance is equivariance. Given a transformation operator Lg : X → X and a mapping f : X → Y, we say that f is equivariant [11, 46] to the transformation if there exists a second transformation operator Kg : Y → Y in the output space of f such that

$$K_g[f(x)] = f(L_g[x]), \quad \text{for all } g \in G, x \in X. \tag{3}$$

The operators Lg and Kg can be seen to describe the same transformation, but in different spaces. In fact, an equivariant map can be seen to map orbits to orbits. We also see that invariance is a special case of equivariance, if we set Kg to the identity operator for all g. Given Lg and Kg, we can solve for the collection of equivariant functions f satisfying the equivariance constraint. Moreover, for linear transformation operators and linear f, a rich theory already exists in which f is referred to as an intertwiner [12]. In the equivariant deep learning literature, neural networks are built from interleaving intertwiners and equivariant nonlinearities. As far as we are aware, most of these methods are hand-designed per pair of transformation operators, with the exception of [13]. In this paper, we introduce a computational method to solve for intertwiners given a pair of transformation operators.
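To make the two definitions concrete, here is a minimal numerical sketch (our own illustration, not taken from the paper's code) that checks Eq. (1) and Eq. (3) for a two-element reflection group; the particular choices of f and the dictionary-based group encoding are assumptions made for illustration only.

```python
import numpy as np

# Two-element reflection group G = {e, flip}: L_g negates the input vector.
L = {"e": lambda x: x, "flip": lambda x: -x}

# An invariant function: f(x) = f(L_g[x]) for all g (Eq. 1).
def f_inv(x):
    return np.sum(x ** 2)

# An equivariant function: flipping the input swaps the two outputs (Eq. 3),
# so the output operator K_g is the swap permutation.
def f_equi(x):
    return np.array([np.maximum(x, 0).sum(), np.maximum(-x, 0).sum()])

K = {"e": lambda y: y, "flip": lambda y: y[::-1]}

x = np.random.randn(4)
for g in ("e", "flip"):
    assert np.isclose(f_inv(x), f_inv(L[g](x)))           # invariance, Eq. (1)
    assert np.allclose(K[g](f_equi(x)), f_equi(L[g](x)))  # equivariance, Eq. (3)
```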
2.2 Markov Decision Processes A Markov decision process (MDP) is a tuple (S, A, R, T, γ), with state space S, action space A, immediate reward function R : S × A → ℝ, transition function T : S × A × S → ℝ≥0, and discount factor γ ∈ [0, 1]. The goal of solving an MDP is to find a policy π ∈ Π, π : S × A → ℝ≥0 (written π(a|s)), where π normalizes to unity over the action space, that maximizes the expected return $R_t = \mathbb{E}_\pi\left[\sum_{k=0}^{T} \gamma^k r_{t+k+1}\right]$. The expected return from a state s under a policy π is given by the value function Vπ. A related object is the Q-value Qπ, the expected return from a state s after taking action a under π. Vπ and Qπ are governed by the well-known Bellman equations [5] (see Supplementary). In an MDP, optimal policies π∗ attain an optimal value V∗ and corresponding Q-value given by $V^*(s) = \max_{\pi \in \Pi} V^\pi(s)$ and $Q^*(s, a) = \max_{\pi \in \Pi} Q^\pi(s, a)$.

MDP with Symmetries Symmetries can appear in MDPs. For instance, in Figure 2 CartPole has a reflection symmetry about the vertical axis. Here we define an MDP with symmetries. In an MDP with symmetries there is a set of transformations on the state-action space which leaves the reward function and transition operator invariant. We define a state transformation and a state-dependent action transformation as Lg : S → S and Kgs : A → A respectively. Invariance of the reward function and transition function is then characterized as

$$R(s, a) = R(L_g[s], K_g^s[a]) \quad \text{for all } g \in G, s \in S, a \in A, \tag{4}$$
$$T(s' \mid s, a) = T(L_g[s'] \mid L_g[s], K_g^s[a]) \quad \text{for all } g \in G, s \in S, a \in A. \tag{5}$$

Written like this, we see that in an MDP with symmetries the reward function and transition operator are invariant along orbits defined by the transformations (Lg, Kgs).

MDP Homomorphisms MDPs with symmetries are closely related to MDP homomorphisms, as we explain below. First we define the latter. An MDP homomorphism h [32, 30] is a mapping from one MDP M = (S, A, R, T, γ) to another M̄ = (S̄, Ā, R̄, T̄, γ) defined by a surjective map from the state-action space S × A to an abstract state-action space S̄ × Ā. In particular, h consists of a tuple of surjective maps (σ, {αs | s ∈ S}), where we have the state map σ : S → S̄ and the state-dependent action map αs : A → Ā. These maps are built to satisfy the following conditions:

$$\bar{R}(\sigma(s), \alpha_s(a)) \triangleq R(s, a) \quad \text{for all } s \in S, a \in A, \tag{6}$$
$$\bar{T}(\sigma(s') \mid \sigma(s), \alpha_s(a)) \triangleq \sum_{s'' \in \sigma^{-1}(s')} T(s'' \mid s, a) \quad \text{for all } s, s' \in S, a \in A. \tag{7}$$

An exact MDP homomorphism provides a model equivalent abstraction [20]. Given an MDP homomorphism h, two state-action pairs (s, a) and (s′, a′) are called h-equivalent if σ(s) = σ(s′) and αs(a) = αs′(a′). Symmetries and MDP homomorphisms are connected in a natural way: if an MDP has symmetries Lg and Kgs, equations (4) and (5) above hold, which means that we can define a corresponding MDP homomorphism, as we do next.

Group-structured MDP Homomorphisms Specifically, for an MDP with symmetries, we can define an abstract state-action space by mapping (s, a) pairs to (a representative point of) their equivalence class (σ(s), αs(a)). That is, state-action pairs and their transformed versions are mapped to the same abstract state in the reduced MDP:

$$(\sigma(s), \alpha_s(a)) = \left(\sigma(L_g[s]),\, \alpha_{L_g[s]}(K_g^s[a])\right) \quad \text{for all } g \in G, s \in S, a \in A. \tag{8}$$

In this case, we call the resulting MDP homomorphism group-structured. In other words, all the state-action pairs in an orbit defined by a group transformation are mapped to the same abstract state by a group-structured MDP homomorphism.
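As a concrete instance of Eqs. (4), (5) and (8), the following sketch (our own illustration; the function names and the choice of orbit representative are hypothetical, not taken from the paper's code) writes down the CartPole reflection as a pair of operators (L_g, K_g^s) together with a map to an orbit representative, i.e. a group-structured MDP homomorphism for this task.

```python
import numpy as np

# CartPole's reflection symmetry: the non-identity group element mirrors the
# state about the vertical axis and swaps the left/right push actions.

def L_flip(s):
    # s = (cart position, pole angle, cart velocity, pole angular velocity);
    # reflecting the scene negates all four components.
    return -s

def K_flip(a):
    # action 0 = push left, action 1 = push right; the reflection swaps them.
    return 1 - a

def sigma_alpha(s, a):
    # A group-structured homomorphism (Eq. 8): map every (s, a) pair to a
    # fixed representative of its orbit, here the one with non-negative angle.
    if s[1] < 0:
        return L_flip(s), K_flip(a)
    return s, a

s, a = np.array([0.1, -0.2, 0.0, 0.5]), 1
s1, a1 = sigma_alpha(s, a)
s2, a2 = sigma_alpha(L_flip(s), K_flip(a))
assert np.allclose(s1, s2) and a1 == a2  # both orbit members map to one pair
```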
Optimal Value Equivalence and Lifted Policies h-equivalent state-action pairs share the same optimal Q-value and optimal value function [30]. Furthermore, there exists an abstract optimal Q-value Q̄∗ and abstract optimal value function V̄∗, such that Q∗(s, a) = Q̄∗(σ(s), αs(a)) and V∗(s) = V̄∗(σ(s)). This is known as optimal value equivalence [30]. Policies can thus be optimized in the simpler abstract MDP. The optimal abstract policy π̄(ā|σ(s)) can then be pulled back to the original MDP using a procedure called lifting (we use the terminology lifting to stay consistent with [30]). The lifted policy is given in Equation 9. A lifted optimal abstract policy is also an optimal policy in the original MDP [30]. Note that while other lifted policies exist, we follow [30, 32] and choose the lifting that divides probability mass uniformly over the preimage:

$$\pi^\uparrow(a \mid s) \triangleq \frac{\bar{\pi}(\bar{a} \mid \sigma(s))}{|\{a \in \alpha_s^{-1}(\bar{a})\}|}, \quad \text{for any } s \in S \text{ and } a \in \alpha_s^{-1}(\bar{a}). \tag{9}$$

3 Method The focus of this section is the design of MDP homomorphic networks—policy networks and value networks obeying the MDP homomorphism. In the first part of the method, we show that any policy network satisfying the MDP homomorphism property must be an equivariant neural network. In the second part, we introduce a novel numerical technique for constructing group-equivariant networks, based on the transformation operators defining the equivalence of state-action pairs under the MDP homomorphism.

3.1 Lifted Policies Are Invariant Lifted policies in symmetric MDPs with group-structured symmetries are invariant under the group of symmetries. Consider the following: take an MDP with symmetries defined by transformation operators (Lg, Kgs) for g ∈ G. Now, if we take s′ = Lg[s] and a′ = Kgs[a] for any g ∈ G, then (s′, a′) and (s, a) are h-equivalent under the corresponding MDP homomorphism h = (σ, {αs | s ∈ S}). So

$$\pi^\uparrow(a \mid s) = \frac{\bar{\pi}(\alpha_s(a) \mid \sigma(s))}{|\{a \in \alpha_s^{-1}(\bar{a})\}|} = \frac{\bar{\pi}(\alpha_{s'}(a') \mid \sigma(s'))}{|\{a' \in \alpha_{s'}^{-1}(\bar{a})\}|} = \pi^\uparrow(a' \mid s'), \tag{10}$$

for all s ∈ S, a ∈ A and g ∈ G. In the first equality we have used the definition of the lifted policy. In the second equality, we have used the definition of h-equivalent state-action pairs, where σ(s) = σ(Lg[s]) and αs(a) = αs′(a′). In the third equality, we have reused the definition of the lifted policy. Thus we see that, written in this way, the lifted policy is invariant under state-action transformations (Lg, Kgs). This equation is very general and applies to all group-structured state-action transformations. For a finite action space, this statement of invariance can be re-expressed as a statement of equivariance, by considering the vectorized policy.

Invariant Policies on Finite Action Spaces Are Equivariant Vectorized Policies For convenience we introduce a vector of probabilities for each of the discrete actions under the policy:

$$\boldsymbol{\pi}(s) \triangleq [\pi(a_1 \mid s), \pi(a_2 \mid s), \ldots, \pi(a_N \mid s)]^\top, \tag{11}$$

where a1, ..., aN are the N possible discrete actions in action space A. The action transformation Kgs maps actions to actions invertibly. Thus applying an action transformation to the vectorized policy permutes its elements. We write the corresponding permutation matrix as Kg. Note that

$$K_g^{-1} \boldsymbol{\pi}(s) \triangleq \left[\pi(K_g^s[a_1] \mid s), \pi(K_g^s[a_2] \mid s), \ldots, \pi(K_g^s[a_N] \mid s)\right]^\top, \tag{12}$$

where writing the inverse Kg−1 instead of Kg is required to maintain the property KgKh = Kgh. The invariance of the lifted policy can then be written as π↑(s) = Kg−1 π↑(Lg[s]), which can be rearranged to the equivariance equation

$$K_g \boldsymbol{\pi}^\uparrow(s) = \boldsymbol{\pi}^\uparrow(L_g[s]) \quad \text{for all } g \in G, s \in S. \tag{13}$$

This equation shows that the lifted policy must satisfy an equivariance constraint. In deep learning, this has already been well-explored in the context of supervised learning [11, 12, 46, 47, 43]. Next, we present a novel way to construct such networks.
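The following sketch (our own toy check, not the paper's released code) verifies the equivariance constraint of Eq. (13) numerically for the CartPole reflection group, using a hand-built weight matrix whose rows are negatives of each other; that construction is an assumption chosen so the constraint holds exactly for a single linear layer followed by a softmax.

```python
import numpy as np

# K_g for the reflection group: the 2x2 permutation swapping the left/right
# action probabilities; L_g negates the 4-dimensional CartPole state.
K_flip = np.array([[0.0, 1.0],
                   [1.0, 0.0]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def policy(s, W):
    # Vectorized policy pi(s) of Eq. (11): logits W @ s, normalized.
    return softmax(W @ s)

# Rows that are negatives of each other make W @ (-s) a swap of the logits,
# so the policy satisfies K_g pi(s) = pi(L_g[s]) (Eq. 13) by construction.
w = np.random.randn(4)
W = np.stack([w, -w])

s = np.random.randn(4)
assert np.allclose(K_flip @ policy(s, W), policy(-s, W))
```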
3.2 Building MDP Homomorphic Networks Our goal is to build neural networks that follow Eq. 13; that is, we wish to find neural networks that are equivariant under a set of state and policy transformations. Equivariant networks are common in supervised learning [11, 12, 46, 47, 43, 41]. For instance, in semantic segmentation, shifts and rotations of the input image result in shifts and rotations of the segmentation. A neural network consisting of only equivariant layers and non-linearities is equivariant as a whole, too [11] (see Appendix B for more details). Thus, once we know how to build a single equivariant layer, we can simply stack such layers together. Note that this is true regardless of the representation of the group, i.e. this works for spatial transformations of the input, feature map permutations in intermediate layers, and policy transformations in the output layer. For the experiments presented in this paper, we use the same group representations for the intermediate layers as for the output, i.e. permutations. For finite groups, such as cyclic groups or permutations, pointwise nonlinearities preserve equivariance [11]. In the past, learnable equivariant layers were designed by hand for each transformation group individually [11, 12, 46, 47, 44, 43, 41]. This is time-consuming and laborious. Here we present a novel way to build learnable linear layers that satisfy equivariance automatically.

Equivariant Layers We begin with a single linear layer z′ = Wz + b, where W ∈ ℝ^{Dout×Din} and b ∈ ℝ^{Dout} is a bias. To simplify the math, we merge the bias into the weights, so W ↦ [W, b] and z ↦ [z, 1]⊤. We denote the space of the augmented weights as $\mathcal{W}_{total}$. For a given pair of linear group transformation operators in matrix form (Lg, Kg), where Lg is the input transformation and Kg is the output transformation, we then have to solve the equation

$$K_g \mathbf{W} \mathbf{z} = \mathbf{W} L_g \mathbf{z}, \quad \text{for all } g \in G, \mathbf{z} \in \mathbb{R}^{D_{in}+1}. \tag{14}$$

Since this equation is true for all z, we can in fact drop z entirely. Our task now is to find all weights W which satisfy Equation 14. We label this space of equivariant weights as $\mathcal{W}$, defined as

$$\mathcal{W} \triangleq \{\mathbf{W} \in \mathcal{W}_{total} \mid K_g \mathbf{W} = \mathbf{W} L_g, \text{ for all } g \in G\}, \tag{15}$$

again noting that we have dropped z. To find the space $\mathcal{W}$, notice that for each g ∈ G the constraint KgW = WLg is in fact linear in W. Thus, to find $\mathcal{W}$ we need to solve a set of linear equations in W. For this we introduce a construction, which we call a symmetrizer S(W). The symmetrizer is

$$S(\mathbf{W}) \triangleq \frac{1}{|G|} \sum_{g \in G} K_g^{-1} \mathbf{W} L_g. \tag{16}$$

S has two important properties, of which proofs are provided in Appendix A. First, S(W) is symmetric: S(W) ∈ $\mathcal{W}$. Second, S fixes any symmetric W: W ∈ $\mathcal{W}$ implies S(W) = W. These properties show that S projects arbitrary W ∈ $\mathcal{W}_{total}$ to the equivariant subspace $\mathcal{W}$. Since $\mathcal{W}$ is the solution set of a set of simultaneous linear equations, $\mathcal{W}$ is a linear subspace of the space of all possible weights $\mathcal{W}_{total}$. Thus each W ∈ $\mathcal{W}$ can be parametrized as a linear combination of basis weights $\{V_i\}_{i=1}^r$, where r is the rank of the subspace and $\mathrm{span}(\{V_i\}_{i=1}^r) = \mathcal{W}$. To find a basis for $\mathcal{W}$, we take a Gram-Schmidt orthogonalization approach. We first sample weights in the total space $\mathcal{W}_{total}$ and then project them into the equivariant subspace with the symmetrizer. We do this for multiple weight matrices, which we then stack and feed through a singular value decomposition to find a basis for the equivariant space. This procedure is outlined in Algorithm 1.

Algorithm 1 Equivariant layer construction
1: Sample N weight matrices $\mathbf{W}_1, \mathbf{W}_2, \ldots, \mathbf{W}_N \sim \mathcal{N}(\mathbf{W}; 0, I)$ for $N \geq \dim(\mathcal{W}_{total})$
2: Symmetrize the samples: $\bar{\mathbf{W}}_i = S(\mathbf{W}_i)$ for $i = 1, \ldots, N$
3: Vectorize the samples and stack them as $\bar{\mathbf{W}} = [\mathrm{vec}(\bar{\mathbf{W}}_1), \mathrm{vec}(\bar{\mathbf{W}}_2), \ldots]$
4: Apply SVD: $\bar{\mathbf{W}} = U \Sigma V^\top$
5: Keep the first $r = \mathrm{rank}(\bar{\mathbf{W}})$ right-singular vectors (columns of $V$) and unvectorize them to the shape of $\mathbf{W}_i$

Any equivariant layer can then be written as a linear combination of basis matrices,

$$\mathbf{W} = \sum_{i=1}^{r} c_i V_i, \tag{17}$$

where the ci's are learnable scalar coefficients, r is the rank of the equivariant space, and the matrices Vi are the basis vectors, formed from the reshaped right-singular vectors in the SVD. An example is shown in Figure 3. To run this procedure, all that is needed are the transformation operators Lg and Kg. Note that we do not need to know the explicit transformation matrices, but just to be able to perform the mappings W ↦ WLg and W ↦ Kg−1W. For instance, some matrix Lg rotates an image patch, but we could equally implement WLg using a built-in rotation function. Code is available at https://github.com/ElisevanderPol/symmetrizer/.
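Below is a minimal numpy sketch of the symmetrizer of Eq. (16) and of Algorithm 1, assuming matrix representations for both operators; the helper names, the CartPole group instantiation, and the rank tolerance are our own illustrative choices rather than the released implementation.

```python
import numpy as np

def symmetrizer(W, group):
    """Eq. (16): S(W) = 1/|G| * sum_g K_g^{-1} W L_g, projecting W onto the
    equivariant subspace. `group` is a list of (K_g, L_g) matrix pairs."""
    return sum(np.linalg.inv(K) @ W @ L for K, L in group) / len(group)

def equivariant_basis(shape, group, tol=1e-6):
    """Algorithm 1: sample, symmetrize, stack, and SVD to get a basis {V_i}."""
    n = shape[0] * shape[1]                      # N >= dim(W_total)
    samples = [symmetrizer(np.random.randn(*shape), group) for _ in range(n)]
    stacked = np.stack([w.reshape(-1) for w in samples])
    _, svals, vt = np.linalg.svd(stacked, full_matrices=False)
    r = int(np.sum(svals > tol * svals.max()))   # numerical rank
    return vt[:r].reshape(r, *shape)             # reshaped right-singular vectors

# CartPole instantiation: L_g acts on the augmented input [state, 1] (the
# bias entry stays fixed), K_g swaps the two policy outputs.
L_e, L_flip = np.eye(5), np.diag([-1., -1., -1., -1., 1.])
K_e, K_flip = np.eye(2), np.array([[0., 1.], [1., 0.]])
group = [(K_e, L_e), (K_flip, L_flip)]

# Build a layer as in Eq. (17): a learnable combination of basis weights.
basis = equivariant_basis((2, 5), group)
coeffs = np.random.randn(len(basis))             # the trainable parameters
W = np.tensordot(coeffs, basis, axes=1)
assert all(np.allclose(K @ W, W @ L) for K, L in group)  # K_g W = W L_g
```

Any linear combination of the basis matrices stays in the equivariant subspace, so only the coefficients need to be learned; in a deep network, each layer would carry its own coefficient vector over a precomputed basis.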
4 Experiments We evaluated three flavors of MDP homomorphic network—an MLP, a CNN, and an equivariant feature extractor—on three RL tasks that exhibit group symmetry: CartPole, a grid world, and Pong. We use RLPYT [36] for the algorithms. Hyperparameters (and the ranges considered), architectures, and group implementation details are in the Supplementary Material. Code is available at https://github.com/ElisevanderPol/mdp-homomorphic-networks.

4.1 Environments For each environment we show S and A with the respective representations of the group transformations.

CartPole In the classic pole balancing task [3], we used a two-element group of reflections about the y-axis. We used OpenAI's CartPole-v1 [7] implementation, which has a 4-dimensional observation vector: (cart position x, pole angle θ, cart velocity ẋ, pole velocity θ̇). The (discrete) action space consists of applying a force left and right (←, →). We chose this example for its simple symmetries.

Grid world We evaluated on a toroidal 7-by-7 predator-prey grid world with agent-centered coordinates. The prey and predator are randomly placed at the start of each episode, which lasts a maximum of 100 time steps. The agent's goal is to catch the prey, which takes a step in a random compass direction with probability 0.15 and stands still otherwise. Upon catching the prey, the agent receives a reward of +1, and -0.1 otherwise. The observation is a 21 × 21 binary image identifying the position of the agent in the center and the prey in relative coordinates. See Figure 6a. This environment was chosen due to its four-fold rotational symmetry.

Pong We evaluated on the RLPYT [36] implementation of Pong. In our experiments, the observation consisted of the 4 last observed frames, with upper and lower margins cut off and downscaled to an 80 × 80 grayscale image. In this setting, there is a flip symmetry over the horizontal axis: if we flip the observations, the up and down actions also flip. A curious artifact of Pong is that it has duplicate (up, down) actions; to simplify matters, we therefore mask out the policy values for the second pair of (up, down) actions. We chose Pong because of its higher-dimensional state space.
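As an illustration of the grid world's group representation (our own sketch; the action indexing and the rotation direction are assumptions, not taken from the paper's code), the four-fold symmetry pairs a rotation of the observation plane with a cyclic shift of the movement actions:

```python
import numpy as np

# C4 rotation group for the grid world: L_g rotates the 21x21 observation by
# k * 90 degrees; K_g cyclically shifts the movement actions, assuming they
# are indexed 0..3 as (up, right, down, left).

def L_rot(obs, k):
    return np.rot90(obs, k=k, axes=(-2, -1))

def K_rot(action, k):
    return (action + k) % 4

# Jointly transforming a state-action pair, as in Eqs. (4)-(5): rotating the
# world and rotating the chosen direction leave the dynamics unchanged.
obs = np.zeros((21, 21))
obs[10, 10] = 1.0  # agent, fixed at the center
obs[3, 7] = 1.0    # prey, in relative coordinates
for k in range(4):
    obs_k, act_k = L_rot(obs, k), K_rot(0, k)  # one orbit of (s, a) pairs

assert np.allclose(L_rot(obs, 4), obs) and K_rot(0, 4) == 0  # group: g^4 = e
```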
Finally, for Pong we additionally compare to two data augmentation baselines. The first is stochastic data augmentation, where each state-action pair is randomly transformed (or not) before being fed to the network. The second is an equivariant version of [16], similar to [35], where both the state and the transformed state are input to the network; the output for the transformed state is appropriately transformed back, and the two policies are averaged.

4.2 Models We implemented MDP homomorphic networks on top of two base architectures: MLP and CNN (exact architectures in the Supplementary). We further experimented with an equivariant feature extractor followed by a non-equivariant network, to isolate where equivariance made the greatest impact.

Basis Networks We call networks whose weights are linear combinations of basis weights basis networks. As an ablation study on all equivariant networks, we sought to measure the effects of the basis training dynamics. We compared an equivariant basis against a pure nullspace basis, i.e. an explicitly non-symmetric basis using the right-null vectors from the equivariant layer construction, and a random basis, where we skip the symmetrization step in the layer construction and use the full-rank basis. Unless stated otherwise, we reduce the number of ‘channels’ in the basis networks compared to the regular networks by dividing by the square root of the group size, ending up with a comparable number of trainable parameters.

4.3 Results and Discussion We show training curves for CartPole in Figures 4a-4b, for Pong in Figure 4c, and for the grid world in Figure 6. Across all experiments we observed that the MDP homomorphic network outperforms both the non-equivariant basis networks and the standard architectures in terms of convergence speed. This confirms our motivation: building symmetry-preserving policy networks leads to faster convergence. Additionally, when compared to the data augmentation baselines in Figure 5, using equivariant networks is more beneficial. This is consistent with other results in the equivariance literature [4, 42, 44, 46]. While data augmentation can be used to create a larger dataset by exploiting symmetries, it does not directly lead to effective parameter sharing (as our approach does). Note that for Pong we only train for the first 15 million frames to highlight the difference at the beginning of training; in contrast, a typical training duration is 50-200 million frames [25, 36].

For our ablation experiment, we wanted to control for the introduction of bases. It is not clear a priori that a network with a basis has the same gradient descent dynamics as an equivalent ‘basisless’ network. We compared equivariant, non-equivariant, and random bases, as mentioned above. We found that the equivariant basis led to the fastest convergence. Figures 4a and 4c show that for CartPole and Pong the nullspace basis converged faster than the random basis. In the grid world there was no clear winner between the two. This is a curious result, requiring deeper investigation in a follow-up. For a third experiment, we investigated what happens if we sacrifice complete equivariance of the policy. This is attractive because it removes the need to find a transformation operator for a flattened output feature map. Instead, we only maintained an equivariant feature extractor, compared against a basic CNN feature extractor. The networks built on top of these extractors were MLPs.
The results, in Figure 4c, are two-fold: 1) basis feature extractors converge faster than standard CNNs, and 2) the equivariant feature extractor has the fastest convergence. We hypothesize that the equivariant feature extractor is fastest because it is easiest to learn an equivariant policy from equivariant features. We have additionally compared an equivariant feature extractor to a regular convolutional network on the Atari game Breakout, where the difference between the equivariant network and the regular network is much less pronounced. For details, see Appendix C.

5 Related Work Past work on MDP homomorphisms has often aimed at discovering the map itself based on knowledge of the transition and reward function, and under the assumption of enumerable state spaces [30, 31, 32, 38]. Other work relies on learning the map from experience sampled from the MDP [39, 6, 23]. Exactly computing symmetries in MDPs is graph isomorphism complete [27], even with full knowledge of the MDP dynamics. Rather than assuming knowledge of the transition and reward function, and small and enumerable state spaces, in this work we take the inverse view: we assume that we have an easily identifiable transformation of the joint state-action space and exploit this knowledge to learn more efficiently. Exploiting symmetries in deep RL has been previously explored in the game of Go, in the form of symmetric filter weights [33, 8] or data augmentation [35]. Other work on data augmentation increases sample efficiency and generalization on well-known benchmarks by augmenting existing data points with state transformations such as random translations, cutout, color jitter, and random convolutions [16, 9, 17, 19]. In contrast, we encode symmetries into the neural network weights, leading to more parameter sharing. Additionally, such data augmentation approaches tend to take the invariance view, augmenting existing data with state transformations that leave the state's Q-values intact [16, 9, 17, 19] (the exceptions being [21] and [24], who augment trajectories rather than just states). Similarly, permutation invariant networks are commonly used in approaches to multi-agent RL [37, 22, 15]. We instead take the equivariance view, which accommodates a much larger class of symmetries that includes transformations on the action space. Abdolhosseini et al. [1] have previously manually constructed an equivariant network for a single group of symmetries in a single RL problem, namely reflections in a bipedal locomotion task. Our MDP homomorphic networks allow for automated construction of networks that are equivariant under arbitrary discrete groups and are therefore applicable to a wide variety of problems. From an equivariance point of view, the automatic construction of equivariant layers is new. [12] comes close to specifying a procedure, outlining the system of equations to solve, but does not specify an algorithm. The basic theory of group equivariant networks was outlined in [11, 12] and [10], with notable implementations for 2D roto-translations on grids [46, 43, 41] and 3D roto-translations on grids [45, 44, 42]. All of these works have relied on hand-constructed equivariant layers.

6 Conclusion This paper introduced MDP homomorphic networks, a family of deep architectures for reinforcement learning problems where symmetries have been identified. MDP homomorphic networks tie weights over symmetric state-action pairs.
This weight-tying leads to fewer degrees of freedom, and in our experiments we found that this translates into faster convergence. We used the established theory of MDP homomorphisms to motivate the use of equivariant networks, thus formalizing the connection between equivariant networks and symmetries in reinforcement learning. As an innovation, we also introduced the first method to automatically construct equivariant network layers given a specification of the symmetries in question, thus removing a significant implementational obstacle. For future work, we want to further understand the symmetrizer and its effect on learning dynamics, as well as to generalize to problems that are not fully symmetric.

7 Acknowledgments and Funding Disclosure Elise van der Pol was funded by Robert Bosch GmbH. Daniel Worrall was funded by Philips. F.A.O. received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 758824—INFLUENCE). Max Welling reports part-time employment at Qualcomm AI Research.

8 Broader Impact The goal of this paper is to make (deep) reinforcement learning techniques more efficient at solving Markov decision processes (MDPs) by making use of prior knowledge about symmetries. We do not expect the particular algorithm we develop to lead to immediate societal risks. However, Markov decision processes are very general and can, e.g., be used to model problems in autonomous driving, smart grids, and scheduling. Thus, solving such problems more efficiently can in the long run have positive or negative societal impact. For example, making transportation or power grids more efficient, thereby making better use of scarce resources, would be a significantly positive impact. Other potential applications, such as in autonomous weapons, pose a societal risk [28]. Like many AI technologies, when used in automation, our technology can have a positive impact (increased productivity) and a negative impact (decreased demand) on labor markets. More immediately, control strategies learned using RL techniques are hard to verify and validate. Without proper precaution (e.g. [40]), employing such control strategies on physical systems thus runs the risk of causing accidents involving people, e.g. due to reward misspecification, unsafe exploration, or distributional shift [2].
1. What is the main contribution of the paper regarding MDP homomorphism? 2. What are the strengths of the proposed approach, particularly in its formalization and scalability? 3. What are the weaknesses of the paper, especially regarding its empirical evaluation and comparison with other methods? 4. How does the reviewer assess the significance of equivariant networks compared to other methods such as data augmentation? 5. What are some concerns regarding the complexity and scalability of the proposed method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper investigates MDP homomorphism but instead of assuming knowledge of the transition and reward function, the paper assumes the existence of an easily identifiable transformation of the state-action space (e.g., symmetries in an image). The paper then introduces the concept of group equivariant networks, networks that are equivariant under a set of state and policy transformations. Finally, the paper also introduces a numerical algorithm for the automated construction of equivariant layers and it demonstrates its applicability in two small domains (cartpole and a gridworld) and one pixel-based environment (the Atari 2600 game Pong). Strengths It formalizes what is now becoming a standard practice in the deep RL community: data augmentation. It is always useful to see a principled presentation of a concept that is becoming so pervasive in the field. Importantly, this paper is really well-written despite covering non-trivial mathematical concepts. It is also great that the paper goes beyond the formalization and actually proposes a new type of network that explicitly captures these homomorphisms. An important point is that the paper also shows the scalability of the proposed approach in a pixel-based domain. Weaknesses My main criticism about this paper is with respect to its empirical evaluation. I don't think the paper provides enough evidence that equivariant networks are better than maybe simpler options to capture invariances/symmetries. Specifically, if I'm knowledgeable of the transformations that lead to invariance (or equivariance), should I use equivariant networks instead of data augmentation? I'd be curious to see how a "regular" network, fed with the different transformations of the input, would perform when compared to equivariant networks. Is this an experiment that was run and I missed it? It seems to me the paper focuses too much on the constraints induced by the networks but not so much on how to leverage such information with regular networks. Simply feeding a standard agent more frames, from transformations, would be a meaningful baseline. Others that come to mind include cutout [Cobbe et al., 2019], random convolutions [Lee et al., 2020], random shifts [Kostrikov et al., 2020], random crop, and color jitter [Laskin et al., 2020]. Such an experiment would be authoritative evidence that equivariant networks are a good approach to deal with problems whose symmetries we know. If it is on par with these methods it might not be that interesting since it is definitely more complex. I don't expect the authors to compare the proposed solution to all methods listed above, some of them are only on arXiv and are quite recent, but I thought a more comprehensive list would be more useful. Finally, the new network requires some matrix inversions. This is a particularly expensive operation. Obviously, the paper has results on an Atari 2600 game, showing that the proposed approach scales up to this setting. It would be interesting to see a longer discussion about the scalability of the proposed idea though. References: I. Kostrikov, D. Yarats, and R. Fergus, "Image augmentation is all you need: Regularizing deep reinforcement learning from pixels," CoRR, vol. abs/2004.13649, 2020. K. Cobbe, O. Klimov, C. Hesse, T. Kim, and J. Schulman, "Quantifying generalization in reinforcement learning," in Proceedings of the International Conference on Machine Learning (ICML), 2019. M. Laskin, K. Lee, A. Stooke, L. Pinto, P.
Abbeel, and A. Srinivas, “Reinforcement learning with augmented data,” CoRR, vol. abs/2004.14990, 2020. K. Lee, K. Lee, J. Shin, and H. Lee, “Network randomization: A simple technique for generalization in deep reinforcement learning,” in The International Conference on Learning Representations (ICLR), 2020.
NIPS
Title MDP Homomorphic Networks: Group Symmetries in Reinforcement Learning Abstract This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong. N/A This paper introduces MDP homomorphic networks for deep reinforcement learning. MDP homomorphic networks are neural networks that are equivariant under symmetries in the joint state-action space of an MDP. Current approaches to deep reinforcement learning do not usually exploit knowledge about such structure. By building this prior knowledge into policy and value networks using an equivariance constraint, we can reduce the size of the solution space. We specifically focus on group-structured symmetries (invertible transformations). Additionally, we introduce an easy method for constructing equivariant network layers numerically, so the system designer need not solve the constraints by hand, as is typically done. We construct MDP homomorphic MLPs and CNNs that are equivariant under either a group of reflections or rotations. We show that such networks converge faster than unstructured baselines on CartPole, a grid world and Pong. 1 Introduction This paper considers learning decision-making systems that exploit symmetries in the structure of the world. Deep reinforcement learning (DRL) is concerned with learning neural function approximators for decision making strategies. While DRL algorithms have been shown to solve complex, highdimensional problems [35, 34, 26, 25], they are often used in problems with large state-action spaces, and thus require many samples before convergence. Many tasks exhibit symmetries, easily recognized by a designer of a reinforcement learning system. Consider the classic control task of balancing a pole on a cart. Balancing a pole that falls to the right requires an equivalent, but mirrored, strategy to one that falls to the left. See Figure 1. In this paper, we exploit knowledge of such symmetries in the state-action space of Markov decision processes (MDPs) to reduce the size of the solution space. We use the notion of MDP homomorphisms [32, 30] to formalize these symmetries. Intuitively, an MDP homomorphism is a map between MDPs, preserving the essential structure of the original MDP, while removing redundancies in the problem description, i.e., equivalent state-action pairs. The removal of these redundancies results in a smaller state-action space, upon which we may more easily build a policy. While earlier work has been concerned with discovering an MDP homomorphism for a given MDP [32, 30, 27, 31, 6, 39], we are instead concerned with how to construct deep policies, satisfying the MDP homomorphism. We call these models MDP homomorphic networks. 
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. MDP homomorphic networks use experience from one state-action pair to improve the policy for all ‘equivalent’ pairs. See Section 2.1 for a definition. They do this by tying the weights for two states if they are equivalent under a transformation chosen by the designer, such as s and L[s] in Figure 1. Such weight-tying follows a similar principle to the use of convolutional networks [18], which are equivariant to translations of the input [11]. In particular, when equivalent state-action pairs can be related by an invertible transformation, which we refer to as group-structured, we show that the policy network belongs to the class of group-equivariant neural networks [11, 46].Equivariant neural networks are a class of neural network, which have built-in symmetries [11, 12, 46, 43, 41]. They are a generalization of convolutional neural networks—which exhibit translation symmetry—to transformation groups (group-structured equivariance) and transformation semigroups [47] (semigroup-structured equivariance). They have been shown to reduce sample complexity for classification tasks [46, 44] and also to be universal approximators of symmetric functions1 [48]. We borrow from the literature on group equivariant networks to design policies that tie weights for state-action pairs given their equivalence classes, with the goal of reducing the number of samples needed to find good policies. Furthermore, we can use the MDP homomorphism property to design not just policy networks, but also value networks and even environment models. MDP homomorphic networks are agnostic to the type of model-free DRL algorithm, as long as an appropriate transformation on the output is given. In this paper we focus on equivariant policy and invariant value networks. See Figure 1 for an example policy. An additional contribution of this paper is a novel numerical way of finding equivariant layers for arbitrary transformation groups. The design of equivariant networks imposes a system of linear constraint equations on the linear/convolutional layers [12, 11, 46, 43]. Solving these equations has typically been done analytically by hand, which is a time-consuming and intricate process, barring rapid prototyping. Rather than requiring analytical derivation, our method only requires that the system designer specify input and output transformation groups of the form {state transformation, policy transformation}. We provide Pytorch [29] implementations of our equivariant network layers, and implementations of the transformations used in this paper. We also experimentally demonstrate that exploiting equivalences in MDPs leads to faster learning of policies for DRL. Our contributions are two-fold: • We draw a connection between MDP homomorphisms and group equivariant networks, proposing MDP homomorphic networks to exploit symmetries in decision-making problems; • We introduce a numerical algorithm for the automated construction of equivariant layers. 2 Background Here we outline the basics of the theory behind MDP homomorphisms and equivariance. We begin with a brief outline of the concepts of equivalence, invariance, and equivariance, followed by a review of the Markov decision process (MDP). We then review the MDP homomorphism, which builds a map between ‘equivalent’ MDPs. 
2.1 Equivalence, Invariance, and Equivariance Equivalence If a function f : X → Y maps two inputs x, x′ ∈ X to the same value, that is f(x) = f(x′), then we say that x and x′ are f -equivalent. For instance, two states s, s′ leading to the 1Specifically group equivariant networks are universal approximators to functions symmetric under linear representations of compact groups. same optimal value V ∗(s) = V ∗(s′) would be V ∗-equivalent or optimal value equivalent [30]. An example of two optimal value equivalent states would be states s and L[s] in the CartPole example of Figure 1. The set of all points f -equivalent to x is called the equivalence class of x. Invariance and Symmetries Typically there exist very intuitive relationships between the points in an equivalence class. In the CartPole example of Figure 1 this relationship is a horizontal flip about the vertical axis. This is formalized with the transformation operator Lg : X → X , where g ∈ G and G is a mathematical group. If Lg satisfies f(x) = f(Lg[x]), for all g ∈ G, x ∈ X , (1) then we say that f is invariant or symmetric to Lg and that {Lg}g∈G is a set of symmetries of f . We can see that for the invariance equation to be satisfied, it must be that Lg can only map x to points in its equivalence class. Note that in abstract algebra for Lg to be a true transformation operator, G must contain an identity operation; that is Lg[x] = x for some g and all x. An interesting property of transformation operators which leave f invariant, is that they can be composed and still leave f invariant, so Lg ◦ Lh is also a symmetry of f for all g, h ∈ G. In abstract algebra, this property is known as a semigroup property. If Lg is always invertible, this is called a group property. In this work, we experiment with group-structured transformation operators. For more information, see [14]. One extra helpful concept is that of orbits. If f is invariant to Lg , then it is invariant along the orbits of G. The orbit Ox of point x is the set of points reachable from x via transformation operator Lg: Ox , {Lg[x] ∈ X |g ∈ G}. (2) Equivariance A related notion to invariance is equivariance. Given a transformation operator Lg : X → X and a mapping f : X → Y , we say that f is equivariant [11, 46] to the transformation if there exists a second transformation operator Kg : Y → Y in the output space of f such that Kg[f(x)] = f(Lg[x]), for all g ∈ G, x ∈ X . (3) The operators Lg and Kg can be seen to describe the same transformation, but in different spaces. In fact, an equivariant map can be seen to map orbits to orbits. We also see that invariance is a special case of equivariance, if we set Kg to the identity operator for all g. Given Lg and Kg, we can solve for the collection of equivariant functions f satisfying the equivariance constraint. Moreover, for linear transformation operators and linear f a rich theory already exists in which f is referred to as an intertwiner [12]. In the equivariant deep learning literature, neural networks are built from interleaving intertwiners and equivariant nonlinearities. As far as we are aware, most of these methods are hand-designed per pair of transformation operators, with the exception of [13]. In this paper, we introduce a computational method to solve for intertwiners given a pair of transformation operators. 
2.2 Markov Decision Processes A Markov decision process (MDP) is a tuple (S,A, R, T, γ), with state space S, action space A, immediate reward function R : S × A → R, transition function T : S × A × S → R≥0, and discount factor γ ∈ [0, 1]. The goal of solving an MDP is to find a policy π ∈ Π, π : S ×A → R≥0 (written π(a|s)), where π normalizes to unity over the action space, that maximizes the expected return Rt = Eπ[ ∑T k=0 γ krt+k+1]. The expected return from a state s under a policy π is given by the value function V π. A related object is the Q-value Qπ, the expected return from a state s after taking action a under π. V π and Qπ are governed by the well-known Bellman equations [5] (see Supplementary). In an MDP, optimal policies π∗ attain an optimal value V ∗ and corresponding Q-value given by V ∗(s) = max π∈Π V π(s) and Q∗(s) = max π∈Π Qπ(s). MDP with Symmetries Symmetries can appear in MDPs. For instance, in Figure 2 CartPole has a reflection symmetry about the vertical axis. Here we define an MDP with symmetries. In an MDP with symmetries there is a set of transformations on the state-action space, which leaves the reward function and transition operator invariant. We define a state transformation and a state-dependent action transformation as Lg : S → S and Ksg : A → A respectively. Invariance of the reward function and transition function is then characterized as R(s, a) = R(Lg[s],K s g [a]) for all g ∈ G, s ∈ S, a ∈ A (4) T (s′|s, a) = T (Lg[s′]|Lg[s],Ksg [a]) for all g ∈ G, s ∈ S, a ∈ A. (5) Written like this, we see that in an MDP with symmetries the reward function and transition operator are invariant along orbits defined by the transformations (Lg,Ksg). MDP Homomorphisms MDPs with symmetries are closely related to MDP homomorphisms, as we explain below. First we define the latter. An MDP homomorphism h [32, 30] is a mapping from one MDPM = (S,A, R, T, γ) to another M̄ = (S̄, Ā, R̄, T̄ , γ) defined by a surjective map from the state-action space S ×A to an abstract state-action space S̄ × Ā. In particular, h consists of a tuple of surjective maps (σ, {αs|s ∈ S}), where we have the state map σ : S → S̄ and the state-dependent action map αs : A → Ā. These maps are built to satisfy the following conditions R̄(σ(s), αs(a)) , R(s, a) for all s ∈ S, a ∈ A, (6) T̄ (σ(s′)|σ(s), αs(a)) , ∑ s′′∈σ−1(s′) T (s′′|s, a) for all s, s′ ∈ S, a ∈ A. (7) An exact MDP homomorphism provides a model equivalent abstraction [20]. Given an MDP homomorphism h, two state-action pairs (s, a) and (s′, a′) are called h-equivalent if σ(s) = σ(s′) and αs(a) = αs′(a′). Symmetries and MDP homomorphisms are connected in a natural way: If an MDP has symmetries Lg and Kg, the above equations (4) and (5) hold. This means that we can define a corresponding MDP homomorphism, which we define next. Group-structured MDP Homomorphisms Specifically, for an MDP with symmetries, we can define an abstract state-action space, by mapping (s, a) pairs to (a representative point of) their equivalence class (σ(s), αs(a)). That is, state-action pairs and their transformed version are mapped to the same abstract state in the reduced MDP: (σ(s), αs(a)) = ( σ(Lg[s]), αLg [s](K s g [a]) ) ∀g ∈ G, s ∈ S, a ∈ A (8) In this case, we call the resulting MDP homomorphism group structured. In other words, all the state-action pairs in an orbit defined by a group transformation are mapped to the same abstract state by a group-structured MDP homomorphism. 
Optimal Value Equivalence and Lifted Policies h-equivalent state-action pairs share the same optimal Q-value and optimal value function [30]. Furthermore, there exists an abstract optimal Q-value Q̄∗ and abstract optimal value function V̄ ∗, such that Q∗(s, a) = Q̄∗(σ(s), αs(a)) and V ∗(s) = V̄ ∗(σ(s)). This is known as optimal value equivalence [30]. Policies can thus be optimized in the simpler abstract MDP. The optimal abstract policy π̄(ā|σ(s)) can then be pulled back to the original MDP using a procedure called lifting 2. The lifted policy is given in Equation 9. A lifted optimal abstract policy is also an optimal policy in the original MDP [30]. Note that while other lifted policies exist, we follow [30, 32] and choose the lifting that divides probability mass uniformly over the preimage: π↑(a|s) , π̄(ā|σ(s)) |{a ∈ α−1s (ā)}| , for any s ∈ S and a ∈ α−1s (ā). (9) 3 Method The focus of the next section is on the design of MDP homomorphic networks—policy networks and value networks obeying the MDP homomorphism. In the first section of the method, we show that any 2Note that we use the terminology lifting to stay consistent with [30]. policy network satisfying the MDP homomorphism property must be an equivariant neural network. In the second part of the method, we introduce a novel numerical technique for constructing groupequivariant networks, based on the transformation operators defining the equivalence state-action pairs under the MDP homomorphism. 3.1 Lifted Policies Are Invariant Lifted policies in symmetric MDPs with group-structured symmetries are invariant under the group of symmetries. Consider the following: Take an MDP with symmetries defined by transformation operators (Lg,Ksg) for g ∈ G. Now, if we take s′ = Lg[s] and a′ = Ksg [a] for any g ∈ G, (s′, a′) and (s, a) are h-equivalent under the corresponding MDP homomorphism h = (σ, {αs|s ∈ S}). So π↑(a|s) = π̄(αs(a)|σ(s)) |{a ∈ α−1s (ā)}| = π̄(αs′(a ′)|σ(s′)) |{a′ ∈ α−1s′ (ā)}| = π↑(a′|s′), (10) for all s ∈ S, a ∈ A and g ∈ G. In the first equality we have used the definition of the lifted policy. In the second equality, we have used the definition of h-equivalent state-action pairs, where σ(s) = σ(Lg(s)) and αs(a) = αs′(a′). In the third equality, we have reused the definition of the lifted policy. Thus we see that, written in this way, the lifted policy is invariant under state-action transformations (Lg,Ksg). This equation is very general and applies for all group-structured stateaction transformations. For a finite action space, this statement of invariance can be re-expressed as a statement of equivariance, by considering the vectorized policy. Invariant Policies On Finite Action Spaces Are Equivariant Vectorized Policies For convenience we introduce a vector of probabilities for each of the discrete actions under the policy π(s) , [π(a1|s), π(a2|s), ..., π(aN |s)]> , (11) where a1, ..., aN are the N possible discrete actions in action spaceA. The action transformation Ksg maps actions to actions invertibly. Thus applying an action transformation to the vectorized policy permutes the elements. We write the corresponding permutation matrix as Kg . Note that K−1g π(s) , [ π(Ksg [a1]|s), π(Ksg [a2]|s), ..., π(Ksg [aN ]|s) ]> , (12) where writing the inverse K−1g instead of Kg is required to maintain the property KgKh = Kgh. The invariance of the lifted policy can then be written as π↑(s) = K−1g π↑(Lg[s]), which can be rearranged to the equivariance equation Kgπ↑(s) = π↑(Lg[s]) for all g ∈ G, s ∈ S, a ∈ A. 
(13) This equation shows that the lifted policy must satisfy an equivariance constraint. In deep learning, this has already been well-explored in the context of supervised learning [11, 12, 46, 47, 43]. Next, we present a novel way to construct such networks. 3.2 Building MDP Homomorphic Networks Our goal is to build neural networks that follow Eq. 13; that is, we wish to find neural networks that are equivariant under a set of state and policy transformations. Equivariant networks are common in supervised learning [11, 12, 46, 47, 43, 41]. For instance, in semantic segmentation shifts and rotations of the input image result in shifts and rotations in the segmentation. A neural network consisting of only equivariant layers and non-linearities is equivariant as a whole, too3 [11]. Thus, once we know how to build a single equivariant layer, we can simply stack such layers together. Note that this is true regardless of the representation of the group, i.e. this works for spatial transformations of the input, feature map permutations in intermediate layers, and policy transformations in the output layer. For the experiments presented in this paper, we use the same group representations for the intermediate layers as for the output, i.e. permutations. For finite groups, such as cyclic groups or permutations, pointwise nonlinearities preserve equivariance [11]. In the past, learnable equivariant layers were designed by hand for each transformation group individually [11, 12, 46, 47, 44, 43, 41]. This is time-consuming and laborious. Here we present a novel way to build learnable linear layers that satisfy equivariance automatically. Equivariant Layers We begin with a single linear layer z′ = Wz + b, where W ∈ RDout×Din and b ∈ RDin is a bias. To simplify the math, we merge the bias into the weights so W 7→ [W,b] and z 7→ [z, 1]>. We denote the space of the augmented weights asWtotal. For a given pair of linear group transformation operators in matrix form (Lg,Kg), where Lg is the input transformation and Kg is the output transformation, we then have to solve the equation KgWz = WLgz, for all g ∈ G, z ∈ RDin+1. (14) Since this equation is true for all z we can in fact drop z entirely. Our task now is to find all weights W which satisfy Equation 14. We label this space of equivariant weights asW , defined as W , {W ∈ Wtotal | KgW = WLg, for all g ∈ G}, (15) again noting that we have dropped z. To find the spaceW notice that for each g ∈ G the constraint KgW = WLg is in fact linear in W. Thus, to findW we need to solve a set of linear equations in W. For this we introduce a construction, which we call a symmetrizer S(W). The symmetrizer is S(W) , 1 |G| ∑ g∈G K−1g WLg. (16) S has three important properties, of which proofs are provided in Appendix A. First, S(W) is symmetric (S(W) ∈ W). Second, S fixes any symmetric W: (W ∈ W =⇒ S(W) = W). These properties show that S projects arbitrary W ∈ Wtotal to the equivariant subspaceW . Since W is the solution set for a set of simultaneous linear equations, W is a linear subspace of the space of all possible weights Wtotal. Thus each W ∈ W can be parametrized as a linear combination of basis weights {Vi}ri=1, where r is the rank of the subspace and span({Vi}ri=1) = W . To find as basis for W, we take a Gram-Schmidt orthogonalization approach. We first sample weights in the total spaceWtotal and then project them into the equivariant subspace with the symmetrizer. 
We do this for multiple weight matrices, which we then stack and feed through a singular value decomposition to find a basis for the equivariant space. This procedure is outlined in Algorithm 1. Any equivariant layer can then be written as a linear combination of bases W = r∑ i=1 ciVi, (17) where the ci’s are learnable scalar coefficients, r is the rank of the equivariant space, and the matrices Vi are the basis vectors, formed from the reshaped right-singular vectors in the SVD. An example is shown in Figure 3. To run this procedure, all that is needed are the transformation operators Lg and Kg . Note we do not need to know the explicit transformation matrices, but just to be able to perform the mappings W 7→WLg and W 7→ K−1g W. For instance, some matrix Lg rotates an image patch, but we could equally implement WLg using a built-in rotation function. Code is available 4. 4 Experiments We evaluated three flavors of MDP homomorphic network—an MLP, a CNN, and an equivariant feature extractor—on three RL tasks that exhibit group symmetry: CartPole, a grid world, and Pong. 3See Appendix B for more details. 4https://github.com/ElisevanderPol/symmetrizer/ Algorithm 1 Equivariant layer construction 1: Sample N weight matrices W1,W2, ...,WN ∼ N (W; 0, I) for N ≥ dim(Wtotal) 2: Symmetrize samples: W̄i = S(Wi) for i = 1, ..., N 3: Vectorize samples and stack as W̄ = [vec(W̄1), vec(W̄2), ...] 4: Apply SVD: W̄ = UΣV> 5: Keep first r = rank(W̄) right-singular vectors (columns of V) and unvectorize to shape of Wi We use RLPYT [36] for the algorithms. Hyperparameters (and the range considered), architectures, and group implementation details are in the Supplementary Material. Code is available 5. 4.1 Environments For each environment we show S and A with respective representations of the group transformations. CartPole In the classic pole balancing task [3], we used a two-element group of reflections about the y-axis. We used OpenAI’s Cartpole-v1 [7] implementation, which has a 4-dimensional observation vector: (cart position x, pole angle θ, cart velocity ẋ, pole velocity θ̇). The (discrete) action space consists of applying a force left and right (←,→). We chose this example for its simple symmetries. Grid world We evaluated on a toroidal 7-by-7 predator-prey grid world with agent-centered coordinates. The prey and predator are randomly placed at the start of each episode, lasting a maximum of 100 time steps. The agent’s goal is to catch the prey, which takes a step in a random compass direction with probability 0.15 and stands still otherwise. Upon catching the prey, the agent receives a reward of +1, and -0.1 otherwise. The observation is a 21× 21 binary image identifying the position of the agent in the center and the prey in relative coordinates. See Figure 6a. This environment was chosen due to its four-fold rotational symmetry. Pong We evaluated on the RLPYT [36] implementation of Pong. In our experiments, the observation consisted of the 4 last observed frames, with upper and lower margins cut off and downscaled to an 80 × 80 grayscale image. In this setting, there is a flip symmetry over the horizontal axis: if we flip the observations, the up and down actions also flip. A curious artifact of Pong is that it has duplicate (up, down) actions, which means that to simplify matters, we mask out the policy values for the second pair of (up, down) actions. We chose Pong because of its higher dimensional state space. 
Finally, for Pong we additionally compare to two data augmentation baselines: first, stochastic data augmentation, where each state-action pair is randomly transformed (or not) before being fed to the network; and second, an equivariant version of [16], similar to [35], where both the state and the transformed state are input to the network, the output of the transformed state is appropriately transformed back, and both policies are averaged.

4.2 Models

We implemented MDP homomorphic networks on top of two base architectures: MLP and CNN (exact architectures in the Supplementary). We further experimented with an equivariant feature extractor, appended by a non-equivariant network, to isolate where equivariance made the greatest impact.

Basis Networks. We call networks whose weights are linear combinations of basis weights basis networks. As an ablation study on all equivariant networks, we sought to measure the effects of the basis training dynamics. We compared an equivariant basis against a pure nullspace basis, i.e. an explicitly non-symmetric basis using the right-null vectors from the equivariant layer construction, and a random basis, where we skip the symmetrization step in the layer construction and use the full-rank basis. Unless stated otherwise, we reduce the number of 'channels' in the basis networks compared to the regular networks by dividing by the square root of the group size, ending up with a comparable number of trainable parameters.

4.3 Results and Discussion

We show training curves for CartPole in Figures 4a-4b, for Pong in Figure 4c, and for the grid world in Figure 6. Across all experiments we observed that the MDP homomorphic network outperforms both the non-equivariant basis networks and the standard architectures in terms of convergence speed. This confirms our motivation that building symmetry-preserving policy networks leads to faster convergence. Additionally, when compared to the data augmentation baselines in Figure 5, using equivariant networks is more beneficial. This is consistent with other results in the equivariance literature [4, 42, 44, 46]. While data augmentation can be used to create a larger dataset by exploiting symmetries, it does not directly lead to effective parameter sharing (as our approach does). Note that in Pong we only train for the first 15 million frames to highlight the difference in the beginning; in contrast, a typical training duration is 50-200 million frames [25, 36].

For our ablation experiment, we wanted to control for the introduction of bases. It is not clear a priori that a network with a basis has the same gradient descent dynamics as an equivalent 'basisless' network. We compared equivariant, non-equivariant, and random bases, as mentioned above. We found the equivariant basis led to the fastest convergence. Figures 4a and 4c show that for CartPole and Pong the nullspace basis converged faster than the random basis. In the grid world there was no clear winner between the two. This is a curious result, requiring deeper investigation in a follow-up.

For a third experiment, we investigated what happens if we sacrifice complete equivariance of the policy. This is attractive because it removes the need to find a transformation operator for a flattened output feature map. Instead, we only maintained an equivariant feature extractor, compared against a basic CNN feature extractor. The networks built on top of these extractors were MLPs.
The results, in Figure 4c, are two-fold: 1) basis feature extractors converge faster than standard CNNs, and 2) the equivariant feature extractor has the fastest convergence. We hypothesize the equivariant feature extractor is fastest because it is easiest to learn an equivariant policy from equivariant features. We have additionally compared an equivariant feature extractor to a regular convolutional network on the Atari game Breakout, where the difference between the equivariant network and the regular network is much less pronounced. For details, see Appendix C.

5 Related Work

Past work on MDP homomorphisms has often aimed at discovering the map itself based on knowledge of the transition and reward function, and under the assumption of enumerable state spaces [30, 31, 32, 38]. Other work relies on learning the map from sampled experience from the MDP [39, 6, 23]. Exactly computing symmetries in MDPs is graph isomorphism complete [27], even with full knowledge of the MDP dynamics. Rather than assuming knowledge of the transition and reward function, and small and enumerable state spaces, in this work we take the inverse view: we assume that we have an easily identifiable transformation of the joint state-action space and exploit this knowledge to learn more efficiently.

Exploiting symmetries in deep RL has been previously explored in the game of Go, in the form of symmetric filter weights [33, 8] or data augmentation [35]. Other work on data augmentation increases sample efficiency and generalization on well-known benchmarks by augmenting existing data points with state transformations such as random translations, cutout, color jitter, and random convolutions [16, 9, 17, 19]. In contrast, we encode symmetries into the neural network weights, leading to more parameter sharing. Additionally, such data augmentation approaches tend to take the invariance view, augmenting existing data with state transformations that leave the state's Q-values intact [16, 9, 17, 19] (the exceptions being [21] and [24], who augment trajectories rather than just states). Similarly, permutation invariant networks are commonly used in approaches to multi-agent RL [37, 22, 15]. We instead take the equivariance view, which accommodates a much larger class of symmetries that includes transformations on the action space. Abdolhosseini et al. [1] have previously manually constructed an equivariant network for a single group of symmetries in a single RL problem, namely reflections in a bipedal locomotion task. Our MDP homomorphic networks allow for automated construction of networks that are equivariant under arbitrary discrete groups and are therefore applicable to a wide variety of problems.

From an equivariance point of view, the automatic construction of equivariant layers is new. [12] comes close to specifying a procedure, outlining the system of equations to solve, but does not specify an algorithm. The basic theory of group equivariant networks was outlined in [11, 12] and [10], with notable implementations for 2D roto-translations on grids [46, 43, 41] and 3D roto-translations on grids [45, 44, 42]. All of these works have relied on hand-constructed equivariant layers.

6 Conclusion

This paper introduced MDP homomorphic networks, a family of deep architectures for reinforcement learning problems where symmetries have been identified. MDP homomorphic networks tie weights over symmetric state-action pairs.
This weight tying leads to fewer degrees of freedom, and in our experiments we found that this translates into faster convergence. We used the established theory of MDP homomorphisms to motivate the use of equivariant networks, thus formalizing the connection between equivariant networks and symmetries in reinforcement learning. As an innovation, we also introduced the first method to automatically construct equivariant network layers given a specification of the symmetries in question, thus removing a significant implementational obstacle. For future work, we want to further understand the symmetrizer and its effect on learning dynamics, as well as generalize to problems that are not fully symmetric.

7 Acknowledgments and Funding Disclosure

Elise van der Pol was funded by Robert Bosch GmbH. Daniel Worrall was funded by Philips. F.A.O. received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 758824 — INFLUENCE). Max Welling reports part-time employment at Qualcomm AI Research.

8 Broader Impact

The goal of this paper is to make (deep) reinforcement learning techniques more efficient at solving Markov decision processes (MDPs) by making use of prior knowledge about symmetries. We do not expect the particular algorithm we develop to lead to immediate societal risks. However, Markov decision processes are very general and can, e.g., be used to model problems in autonomous driving, smart grids, and scheduling. Thus, solving such problems more efficiently can in the long run cause positive or negative societal impact. For example, making transportation or power grids more efficient, thereby making better use of scarce resources, would be a significantly positive impact. Other potential applications, such as in autonomous weapons, pose a societal risk [28]. Like many AI technologies, when used in automation, our technology can have a positive impact (increased productivity) and a negative impact (decreased demand) on labor markets. More immediately, control strategies learned using RL techniques are hard to verify and validate. Without proper precaution (e.g. [40]), employing such control strategies on physical systems thus runs the risk of causing accidents involving people, e.g. due to reward misspecification, unsafe exploration, or distributional shift [2].
1. What is the main contribution of the paper regarding deep reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of notation, novelty, and potential practicality?
3. What are the weaknesses of the paper, especially regarding empirical evidence, data augmentation, scalability, and comparisons with other works?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

This paper presents a family of deep nets that can incorporate equivariance properties for deep reinforcement learning. The contributions are two-fold: (1) they formalized the relation between equivariance properties and deep RL using MDP homomorphisms, and (2) they proposed a novel algorithm to build equivariant layers. Experiments were conducted on the CartPole, grid world, and Pong environments, demonstrating that incorporating equivariance properties leads to faster convergence.

Strengths

1. Overall, I find the paper easy to read and motivational. I agree with the authors that deep RL rarely includes equivariance properties in its modeling, and their work could be useful for improving the data efficiency of deep RL.
2. The notation and approach are described clearly. In particular, the part characterizing the relationship between MDP homomorphisms and equivariance properties is interesting and novel. While the result is somewhat expected, I think it deserves credit as a contribution, and it benefits the community to express the relations in this clear manner.
3. Aside from the contribution to RL, this work also proposes an approach to automatically build equivariant layers rather than handcrafting them, which has the potential to make equivariant layers more practical and possibly see more adoption in RL systems.
4. Code is included with the submission, and the range of hyperparameters used is reported.

Weaknesses

Here are some concerns with the paper:
a. The presented empirical evidence could be strengthened. In particular, the environments of CartPole, grid world, and Pong are relatively toy examples for RL. More challenging environments, e.g. other Atari games, would have made the results more convincing.
b. From my understanding, the baselines are without data augmentations. Would a data augmentation approach be just as effective? For example, a flip symmetry would only double the amount of data being processed. Data augmentation also does not require hand-constructed layers and is easy to implement. A comparison would further demonstrate the effectiveness of the proposed approach.
c. Will the proposed approach run into scalability issues if G is large? For example, when G is a permutation group.

Minor:
d. Prior works have also explored equivariance properties in RL, e.g., [A, B], which use graph neural networks for permutation equivariance/invariance guarantees in multi-agent settings.
e. Is there a typo at Line 118 for Q*? It should be Q*(s,a), e.g. line 139.

[A] Liu, Iou-Jen, Raymond A. Yeh, and Alexander G. Schwing. "PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning." Conference on Robot Learning. 2019.
[B] Jiang, Jiechuan, et al. "Graph Convolutional Reinforcement Learning." International Conference on Learning Representations. 2020.
NIPS
Title Reinforcement Learning with Neural Radiance Fields

Abstract

It is a long-standing problem to find effective representations for training reinforcement learning (RL) agents. This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information. Specifically, we propose to train an encoder that maps multiple image observations to a latent space describing the objects in the scene. The decoder built from a latent-conditioned NeRF serves as the supervision signal to learn the latent space. An RL algorithm then operates on the learned latent space as its state representation. We call this NeRF-RL. Our experiments indicate that NeRF as supervision leads to a latent space better suited for the downstream RL tasks involving robotic object manipulations like hanging mugs on hooks, pushing objects, or opening doors. Video: https://dannydriess.github.io/nerf-rl

∗Equal contribution. Correspondence: [email protected]

1 Introduction

The sample efficiency of reinforcement learning (RL) algorithms crucially depends on the representation of the underlying system state they operate on [1, 2, 3, 4, 5, 6, 7]. Sometimes, a low-dimensional (direct) representation of the state, such as the positions of the objects in the environment, is considered to make the resulting RL problem most efficient [2]. However, such low-dimensional, direct state representations can have several disadvantages. On the one hand, a perception module, e.g., pose estimation, is necessary in the real world to obtain the representation from raw observations, which often is difficult to achieve in practice with sufficient robustness. On the other hand, if the goal is to learn policies that generalize over different object shapes [8], using a low-dimensional state representation is often impractical. Such scenarios, while challenging for RL, are common, e.g., in robotic manipulation tasks.

Therefore, there is a long history of approaches that consider RL directly from raw, high-dimensional observations like images (e.g., [9, 10]). Typically, an encoder takes the high-dimensional input and maps it to a low-dimensional latent representation of the state. The RL algorithm (e.g., the Q-function or the policy network) then operates on the latent vector as state input. This way, no separate perception module is necessary, the framework can extract information from the raw observations that is relevant for the task, and the RL agent, in principle, may generalize over challenging environments in which, e.g., object shapes are varied. While these are advantages in principle, jointly training encoders capable of processing high-dimensional inputs from the RL signal alone is challenging. To address this, one approach is to pretrain the encoder on a different task, e.g., image reconstruction [1, 4, 11], multi-view consistency [6], or a time-contrastive task [3]. Alternatively, an auxiliary loss on the latent encoding can be added during the RL procedure [5]. In both cases, the choice of the actual (auto-)encoder architecture and associated (auxiliary) loss function has a significant influence on the usefulness of the resulting latent space for the downstream RL task. Especially for image data, convolutional neural networks (CNNs) are commonly used for the encoder [12].
However, 2D CNNs have a 2D (equivariance) bias, while for many RL tasks, the 3D structure of our world is essential. Architectures like Vision Transformers [13, 14] process images with no such direct 2D bias, but they often require large-scale data, which might be challenging in RL applications. Additionally, although multiple uncalibrated 2D image inputs can be used with generic image encoders [15], they do not benefit from 3D inductive biases, which may help, for example, in resolving ambiguities in 2D images such as occlusions and object permanence.

Recently, Neural Radiance Fields (NeRFs) [16] have shown great success in learning to represent scenes with a neural network that enables rendering the scene from novel viewpoints, and have sparked broad interest in computer vision [17]. NeRFs exhibit a strong 3D inductive bias, leading to better scene reconstruction capabilities than methods composed of generic image encoders (e.g., [18]). In the present work, we investigate whether incorporating these 3D inductive biases of NeRFs into learning a state representation can benefit RL. Specifically, we propose to train an encoder that maps multiple RGB image views of the scene to a latent representation through an auto-encoder structure, where a (compositional) NeRF decoder provides the self-supervision signal using an image reconstruction loss for each view. In the experiments, we show for multiple environments that supervision from NeRF leads to a latent representation that makes the downstream RL procedure more sample efficient compared to supervision via a 2D CNN decoder, a contrastive loss on the latent space, or even hand-engineered, perfect low-level state information given as keypoints. Commonly, RL is trained on environments where the objects have the same shape. Our environments include hanging mugs on hooks, pushing objects on a table, and a door opening scenario. In all of these, the objects' shapes are not fixed, and we require the agent to generalize over all shapes from a distribution.

To summarize our main contributions: (i) we propose to train state representations for RL with NeRF supervision, and (ii) we empirically demonstrate that an encoder trained with a latent-conditioned NeRF decoder, especially with an object-compositional NeRF decoder, leads to increased RL performance relative to standard 2D CNN auto-encoders, contrastive learning, or expert keypoints.

2 Related Work

Neural Scene/Object Representations in Computer Vision, and Applications. To our knowledge, the present work is the first to explore whether neural scene representations like NeRFs can benefit RL. Outside of RL, however, neural scene representations have been a very active research field, both in the representations themselves [19, 20, 21, 22] and in their applications; see [23, 24, 17] for recent reviews. Within the family of NeRFs and related methods, major thrusts of research have included: improving modeling formulations [25, 26], modeling larger scenes [26, 27], addressing (re-)lighting [28, 29, 30], and, an especially active area of research, improving speed, both of training and of inference-time rendering [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. In our case, we are not constrained by inference-time computation issues, since we do not need to render images and only have to run our latent-space encoder (with a runtime of approx. 7 ms on an RTX 3090).
Additionally, of particular relevance, various methods have developed latent-conditioned [42, 43, 44] or compositional/object-oriented [45, 46, 47, 48, 49, 50, 51, 52, 53] approaches for NeRFs, although neither they nor, to our knowledge, other NeRF-style methods have been applied to RL. Neural scene representations have found application across many fields (e.g., augmented reality and medical imaging [54]), and both NeRFs [55, 56, 57, 58] and other neural scene approaches [59, 60, 61, 62] have started to be used for various problems in robotics, including pose estimation [55], trajectory planning [56], visual foresight [11, 53], grasping [59, 57], and rearrangement tasks [60, 61, 58].

Learning State Representations for Reinforcement Learning. One of the key enabling factors for the success of deep RL is its ability to find effective representations of the environment from high-dimensional observation data [10, 63]. Extensive research has gone into investigating different ways to learn better state representations using various auxiliary objective functions. Contrastive learning is a common objective and has shown success in unsupervised representation learning in computer vision applications [64, 65]. Researchers built upon this success and have shown that such learning objectives can lead to better performance and sample efficiency in deep RL [66, 67], where the contrasting signals can come from time alignment [68, 3], camera viewpoints [69], and different sensory modalities [70], with applications in real-world robotic tasks [6, 71]. Extensive efforts have investigated the role of representation learning in RL [72], provided a detailed analysis of the importance of different visual representation pretraining methods [73], and shown how to improve training stability in the face of multiple auxiliary losses [74]. There is also a range of additional explorations of pretraining methods with novel objective functions (e.g., bisimulation metrics [75] and temporal cycle-consistency loss [76]) and less-explored data sources (e.g., in-the-wild images [77] and action-free videos [78]); see the survey [79] for more related work in this direction. Our method is different in that we explicitly utilize a decoder that includes strong 3D inductive biases provided by NeRFs, which we empirically show improves RL for tasks that depend on the geometry of the objects.

3 Background

3.1 Reinforcement Learning

This work considers decision problems that can be described as discrete-time Markov Decision Processes (MDPs) M = ⟨S, A, T, γ, R, P_0⟩. S and A are the sets of all states and actions, respectively. The transition probability (density) from s to s′ using an action a is T(s′ | s, a). The agent receives a real-valued reward R(s, a, s′) after each step. The discount factor γ ∈ [0, 1) trades off immediate and future rewards. P_0 : S → R⁺₀ is the distribution of the start state. RL algorithms try to find the optimal policy π* : S × A → R⁺₀, where

π* = argmax_π Σ_{t=0}^∞ γ^t E_{s_0 ∼ P_0, a_t ∼ π(·|s_t), s_{t+1} ∼ T(·|s_t, a_t)} [R(s_t, a_t, s_{t+1})].

Importantly, in this work, we consider RL problems where the state s encodes both the position and the shape of the objects in the scene. We require the RL agent to generalize over all of these shapes at test time. We can therefore think of the state as a tuple s = (s_p, s_s), where s_p encodes positional information and s_s encodes the shapes involved.
We focus the experiments on sparse reward settings, meaning R(s, a, s′) = R_0 > 0 for s′ ∈ S_g and R(s, a, s′) = 0 otherwise, where the volume of S_g ⊂ S is much smaller than the volume of S. The state space S usually is low-dimensional or a minimal description of the degrees of freedom of the system. In this work, we consider that the RL algorithm has only access to a (high-dimensional) observation y ∈ Y of the scene (e.g., RGB images). In particular, this means that the policy takes observations as input, a ∼ π(· | y). Since we assume that the underlying state s = (s_p, s_s) is fully observable from y, we can treat y like a state for an MDP.

Reinforcement Learning with Learned Latent Scene Representations. The general idea of RL with learned latent scene representations is to learn an encoder Ω that maps an observation y ∈ Y to a k-dimensional latent vector z = Ω(y) ∈ Z ⊂ R^k of the scene. The actual RL components, e.g., the Q-function or policy, then operate on z as the state description. For a policy π, this means that the action a ∼ π(· | z) = π(· | Ω(y)) is conditioned on the latent vector z instead of the observation y directly. The dimension k of the latent vector is typically (much) smaller than that of the observation space Y, but larger than that of the state space S.

3.2 Neural Radiance Fields (NeRFs)

The general idea of NeRF, originally proposed by [16], is to learn a function f = (σ, c) that predicts the emitted RGB color value c(x) ∈ R³ and volume density σ(x) ∈ R_{≥0} at any 3D world coordinate x ∈ R³. Based on f, an image from an arbitrary view and camera parameters can be rendered by computing the color C(r) ∈ R³ of each pixel along its corresponding camera ray r(α) = r(0) + αd through the volumetric rendering relation

C(r) = ∫_{α_n}^{α_f} T_f(r, α) σ(r(α)) c(r(α)) dα,  with  T_f(r, α) = exp(−∫_{α_n}^{α} σ(r(u)) du). (1)

Here, r(0) ∈ R³ is the camera origin, d ∈ R³ is the pixel-dependent direction of the ray, and α_n, α_f ∈ R are the near and far bounds within which objects are expected, respectively. The camera rays are determined from the camera matrix K (intrinsics and extrinsics) describing the desired view.
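Before turning to our framework, it may help to see how the rendering relation in Eq. 1 is evaluated in practice. The sketch below uses the standard quadrature approximation from [16]; the field is a toy stand-in (a constant-color sphere), not the latent-conditioned decoder introduced in the next section.

```python
import numpy as np

def render_ray(f, origin, direction, near, far, n_samples=64):
    """Quadrature for Eq. 1: C(r) ~= sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    alphas = np.linspace(near, far, n_samples)              # depths along the ray
    deltas = np.append(np.diff(alphas), (far - near) / n_samples)
    points = origin + alphas[:, None] * direction           # r(alpha) = r(0) + alpha*d
    sigma, color = f(points)                                # shapes (n,), (n, 3)
    seg = sigma * deltas
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(seg[:-1])]))  # T_i
    weights = trans * (1.0 - np.exp(-seg))                  # per-sample contribution
    return (weights[:, None] * color).sum(axis=0)           # pixel color C(r)

# Toy field: a dense gray sphere of radius 0.5 at the origin.
def toy_field(x):
    sigma = 10.0 * (np.linalg.norm(x, axis=-1) < 0.5)
    return sigma, np.full((x.shape[0], 3), 0.5)

pixel = render_ray(toy_field, np.array([0.0, 0.0, -2.0]),
                   np.array([0.0, 0.0, 1.0]), near=0.5, far=3.5)
```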
4 Learning State Representations for RL with NeRF Supervision

This section describes our proposed framework, in which we use a latent state space for RL that is learned from NeRF supervision. For learning the latent space, we use an encoder-decoder structure where the decoder is a latent-conditioned NeRF, which may be either a global [42, 43, 44] or a compositional NeRF decoder [53]. To our knowledge, no prior work has used such NeRF-derived supervision for RL. In Sec. 4.1 we describe this proposition, Sec. 4.2 provides an overview of the encoder-decoder training, and Secs. 4.3 and 4.4 introduce options for the NeRF decoder and encoder, respectively.

4.1 Using Latent-Conditioned NeRF for RL

We propose the state representation z on which an RL algorithm operates to be a latent vector produced by an encoder that maps images from multiple views to a latent z, trained with a (compositional) latent-conditioned NeRF decoder. As will be verified in experiments, we hypothesize that this framework is beneficial for the downstream RL task, as it produces latent vectors that represent the actual 3D geometry of the objects in the scene, can handle multiple objects well, and can fuse multiple views in a consistent way to deal with occlusions by providing shape completion, all of which is relevant for solving tasks where the geometry is important.

There are two steps to our framework, as shown in Fig. 1. First, we train the encoder + decoder from a dataset collected by random interactions with the environment, i.e., we do not yet need a trained policy. Second, we take the encoder trained in the first step, which we leave frozen, and use the latent space to train an RL policy. Note that we investigate two variants of the auto-encoder framework: a global one, where the whole scene is represented by one single latent vector, and a compositional one, where objects are represented by their own latent vectors. For the latter, objects are identified by masks in the views.

4.2 Overview: Auto-Encoder with Latent-Conditioned NeRF Decoder

Assume that an observation y = (I^{1:V}, K^{1:V}, M^{1:V}) of the scene consists of RGB images I^i ∈ R^{3×h×w}, i = 1, ..., V, taken from V many camera views, their respective camera projection matrices K^i ∈ R^{3×4} (including both intrinsics and extrinsics), and per-view image masks M^{1:V}. For a global NeRF decoder, these are global non-background masks M^i_tot ∈ {0, 1}^{h×w}, and for a compositional NeRF decoder as in [53], these are sets of binary masks M^i_j ∈ {0, 1}^{h×w} that identify the objects j = 1, ..., m in the scene in view i. The global case is equivalent to m = 1, M^i_{j=1} = M^i_tot.

The encoder Ω maps these posed image observations from the multiple views into a set of latent vectors z_{1:m}, where each z_j represents each object in the scene separately in the compositional case, or the single z_1 represents all objects in the scene. This is achieved by querying Ω on the masks M^{1:V}_j, i.e.,

z_j = Ω(I^{1:V}, K^{1:V}, M^{1:V}_j) ∈ R^k (2)

for object j. The supervision signal to train the encoder is the image reconstruction loss

L^i = ‖I^i ∘ M^i_tot − D(Ω(I^{1:V}, K^{1:V}, M^{1:V}_{1:m}), K^i)‖²₂ (3)

on the input view i, where the decoder D renders an image I = D(z_{1:m}, K) for arbitrary views specified by the camera matrix K from the set of latent vectors z_{1:m}. Both the encoder and decoder are trained end-to-end at the same time. The target images for the decoder are the same in both the global and compositional case: the global-masked image I^i ∘ M^i_tot (∘ is the element-wise product). In the compositional case this can be computed with M^i_tot = ⋁_{j=1}^m M^i_j. By fusing the information from multiple views of the objects into the latent vector from which the decoder has to be able to render the scene from multiple views, this auto-encoder framework can learn latent vectors that represent the 3D configurations (shape and pose) of the objects in the scene.
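The per-view objective in Eq. 3 can be written down compactly. Below is a hedged PyTorch-style sketch: `encoder` and `decoder` are placeholders for Ω and D, the mask layout and the use of a per-pixel mean (rather than the plain squared norm of Eq. 3) are our assumptions, and the toy stand-ins at the bottom exist only so the snippet executes.

```python
import torch

def reconstruction_loss(encoder, decoder, images, cams, masks):
    """Sketch of Eq. 3, averaged over input views: compare the globally masked
    image I^i * M^i_tot against the decoder's rendering from the latents z_{1:m}.
    images: (V, 3, H, W); cams: (V, 3, 4); masks: (V, m, H, W), binary."""
    m_tot = masks.any(dim=1, keepdim=True).float()             # (V, 1, H, W)
    latents = torch.stack([encoder(images, cams, masks[:, j])  # Eq. 2 per object
                           for j in range(masks.shape[1])])    # (m, k)
    loss = 0.0
    for i in range(images.shape[0]):                           # loop over views i
        rendered = decoder(latents, cams[i])                   # render view i
        loss = loss + ((images[i] * m_tot[i] - rendered) ** 2).mean()
    return loss / images.shape[0]

# Toy stand-ins for Omega and D so the snippet runs; the real encoder and
# decoder are the networks described in Secs. 4.3-4.4.
V, m, H, W, k = 4, 2, 8, 8, 16
enc = lambda I, K, M: (I * M.unsqueeze(1)).mean() * torch.ones(k)
dec = lambda z, K: z.mean() * torch.ones(3, H, W)
loss = reconstruction_loss(enc, dec, torch.rand(V, 3, H, W),
                           torch.rand(V, 3, 4), torch.rand(V, m, H, W) > 0.5)
```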
4.3 Latent-Conditioned NeRF Decoder Details

Global. The original NeRF formulation [16] learns a fully connected network f that represents one single scene (Sec. 3.2). In order to create a decoder from NeRFs within an auto-encoder to learn a latent space, we condition the NeRF f(·, z) on the latent vector z ∈ R^k [42, 43, 44]. While approaches such as [42, 43, 44] use the latent code to represent factors such as lighting or category-level generalization, in our case the latent code is intended to represent the scene variation, i.e., shape and configuration of objects, such that a downstream RL agent may use it as a state representation.

Compositional. In the compositional case, the encoder produces a set of latent vectors z_{1:m} describing each object j = 1, ..., m individually. This leads to m many NeRFs (σ_j(x), c_j(x)) = f_j(x) = f(x, z_j), j = 1, ..., m, with their associated volume density σ_j and color value c_j. Note that while one could use different networks f_j with their own network weights for each object, we have a single network f for all objects. This means that both the object's pose as well as its shape and type are represented through the latent code z_j. In order to force those conditioned NeRFs to learn the 3D configuration of each object separately, we compose them into a global NeRF model with the composition formulas (proposed, e.g., by [80, 81]):

σ(x) = Σ_{j=1}^m σ_j(x),   c(x) = (1/σ(x)) Σ_{j=1}^m σ_j(x) c_j(x).

As this composition happens in 3D space, the latent vectors will be learned such that they correctly represent the actual shape and pose of the objects in the scene with respect to the other objects, which we hypothesize may be useful for the downstream RL agent.

4.4 Encoder Details

The encoder Ω operates by fusing multiple views together to estimate the latent vector for the RL task. Since the scientific question of this work is whether a decoder built from NeRFs to train the encoder end-to-end is beneficial for RL, we consider two different encoder architectures. The first is a 2D CNN that averages feature encodings from the different views, where each encoding is additionally conditioned on the camera matrix of that view. The second is based on a learned 3D neural vector field that incorporates 3D biases by fusing the different camera views in 3D space through 3D convolutions and camera projection. This way, we are able to distinguish between the importance of 3D priors incorporated into the encoder versus the decoder.

Per-image CNN Encoder ("Image encoder"). For the global version, we utilize the network architecture from [11] as the encoder. In order to work with multiple objects in the compositional case, we modify the architecture from [11] to take the object masks into account as follows. For each object j, the 2D CNN encoder computes

z_j = Ω_CNN(I^{1:V}, K^{1:V}, M^{1:V}_j) = h_MLP((1/V) Σ_{i=1}^V g_MLP(E_CNN(I^i ∘ M^i_j), K^i)). (4)

E_CNN is a ResNet-18 [82] CNN feature extractor that determines a feature from the masked input image I^i ∘ M^i_j of object j for each view i, which is then concatenated with the (flattened) camera matrix. The output of the network g_MLP is hence the encoding of each view, including the camera information, which is averaged and then processed with h_MLP to produce the final latent vector. Note that in the global case, we set m = 1, M^i_{j=1} = M^i_tot, such that Ω_CNN produces a single latent vector.

Neural Field 3D CNN Encoder ("Field encoder"). Several works [43] have considered incorporating 3D biases into learning an encoder by computing pixel-aligned features from queried 3D locations of the scene to fuse the information from the different camera views directly in 3D space. We utilize the encoder architecture from [53], where the idea is to learn a neural vector field ϕ[I^{1:V}, M^{1:V}_j] : R³ → R^E over 3D space, conditioned on the input views and masks. The features of ϕ are computed by projecting the query point into the camera coordinate system of the respective view. To turn ϕ into a latent vector, it is queried on a workspace set X_h ∈ R^{d_X×h_X×w_X} (a 3D grid) and then processed by a 3D convolutional network, i.e., z_j = E_3D-CNN(ϕ[I^{1:V}, M^{1:V}_j](X_h)). This method differs from [43, 83, 60] by computing a latent vector from the pixel-aligned features.
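Stepping back to the composition step of Sec. 4.3, here is a minimal sketch of the density-weighted blending of per-object fields at a single query point. The small eps guard for near-zero total density is our own implementation detail, not from the paper.

```python
import numpy as np

def compose(sigmas, colors, eps=1e-8):
    """Compose m per-object fields at one query point x:
    sigma(x) = sum_j sigma_j(x); c(x) = (1/sigma(x)) * sum_j sigma_j(x) * c_j(x).
    sigmas: (m,); colors: (m, 3). eps (our addition) guards empty space."""
    sigma = sigmas.sum()
    color = (sigmas[:, None] * colors).sum(axis=0) / max(sigma, eps)
    return sigma, color

# Two objects at the query point: a dense red one and a nearly empty blue one;
# the composed color is dominated by the denser object.
sigma, color = compose(np.array([5.0, 0.1]),
                       np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]))
```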
5 Baselines / Alternative State Representations

In this section, we briefly describe alternative ways of training an encoder for RL, which we investigate in the experiments as baselines and ablations. For details, refer to the appendix.

Conv. Autoencoder. This baseline uses a standard CNN decoder based on deconvolutions instead of NeRF to reconstruct the image from the latent representation, similar to [1]. With this baseline we therefore investigate the influence of the NeRF decoder relative to CNN decoders. We follow the architecture of [11] for the deconvolution part in the global case. In the compositional case, we modify the architecture to deal with a set of individual latent vectors instead of a single, global one. The image I = D_deconv(g_MLP((1/m) Σ_{j=1}^m z_j), K) is rendered from z_{1:m} by first averaging the latent vectors and then processing the averaged vector with a fully connected network g_MLP, leading to an aggregated feature. This aggregated feature is concatenated with the (flattened) camera matrix K describing the desired view and then rendered into the image with D_deconv. In the experiments, we utilize this decoder as the supervision signal to train the latent space produced by the 2D CNN encoder from Sec. 4.4. In the compositional version, the 2D CNN encoder (4) uses the same object masks as the compositional NeRF-RL variant.

Contrastive Learning. As an alternative to learning an encoder via a reconstruction loss, the idea of contrastive learning [84] is to define a loss function directly on the latent space that pulls latent vectors describing the same configurations together (positive samples) while pushing ones representing different system states apart (negative samples). A popular approach to achieve this is the InfoNCE loss [85, 64]; see the sketch below for a concrete form. Let y_i and ỹ_i be two different observations of the same state, where ·̃ denotes a perturbed/augmented version of the observation. For a mini-batch of observations {(y_i, ỹ_i)}_{i=1}^n, after encoding those into their respective latent vectors z_i = Ω(y_i), z̃_i = Ω(ỹ_i) with the encoder Ω, the loss for that batch would use (z_i, z̃_i) as a positive pair and (z_i, z̃_{j≠i}) as negative pairs, or some similar variation. A crucial question in contrastive learning is how the observation y is perturbed/augmented into ỹ to generate positive and negative training pairs, described in the following.

CURL. In CURL [5], the input image is randomly cropped to generate y and ỹ. We closely follow the hyperparameters and design of [5]. CURL operates on a single input view, and we choose a view for this baseline from which the state of the environment can be inferred as well as possible (Fig. 17).

Multi-View CURL. This baseline investigates whether the neural field 3D encoder (Sec. 4.4) can be trained with a contrastive loss. As this encoder operates on multiple input views, we double the number of available camera views. Half of the views are the same as in the other experiments; the other half are captured from slightly perturbed camera angles. We use the same loss as CURL, but with different contrastive pairs – rather than from augmentation, the contrastive style is taken from TCN [68]: the positive pairs come from different views at the same moment in time, while negative pairs come from different times. Therefore, this baseline can be seen as a multi-view adaptation of CURL [5].
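As a concrete reference for the contrastive baselines above, here is a minimal sketch of an InfoNCE-style loss over a batch of latent pairs. It uses cosine similarity for simplicity; CURL itself uses a learned bilinear similarity, so treat this as an illustration rather than the exact baseline objective.

```python
import torch
import torch.nn.functional as F

def info_nce(z, z_tilde, temperature=0.1):
    """InfoNCE over a batch of latent pairs: (z_i, z~_i) is the positive pair,
    and (z_i, z~_j) for j != i act as negatives.
    z, z_tilde: (n, k) latents from two augmentations of the same states."""
    z = F.normalize(z, dim=1)
    z_tilde = F.normalize(z_tilde, dim=1)
    logits = z @ z_tilde.t() / temperature   # (n, n) similarity matrix
    labels = torch.arange(z.shape[0])        # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# z and z_tilde would come from encoding two crops of the same image (CURL)
# or two camera views at the same time step (multi-view variant).
loss = info_nce(torch.randn(32, 50), torch.randn(32, 50))
```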
Direct State / Keypoint Representations. Finally, we also consider a direct, low-dimensional representation of the state. Since we are interested in generalizing over different object shapes, we consider multiple 3D keypoints that are attached at relevant locations of the objects by expert knowledge and observed with a perfect keypoint detector [8]. See Fig. 2b for a visualization of those keypoints. The keypoints provide information about both the object shape and its pose. Furthermore, as seen in Fig. 2b, they have been chosen to reflect those locations in the environment relevant to solving the task. Additionally, we report results where the state is represented by the poses of the objects – as this cannot represent object shape, in this case we use a constant object shape for training and test.

6 Experiments

We evaluate our proposed method on different environments where the geometry of the objects in the scene is important for solving the task successfully. Please also refer to the video: https://dannydriess.github.io/nerf-rl. Commonly, RL is trained and evaluated on a single environment, where only the poses are changed but the involved object shapes are kept constant. Since latent-conditioned NeRFs have been shown to be capable of generalizing over geometry [43], we consider experiments where we require the RL agent to generalize over object shapes within some distribution. Answering the scientific question of this work requires environments with multi-view observations — and, for the compositional versions, object masks as well. These are not provided in standard RL benchmarks, which is the reason for choosing the environments investigated in this work. We use PPO [86] as the RL algorithm and four camera views in all experiments. Refer to the appendix for more details about our environments, parameter choices, network architectures, and training times.

6.1 Environments

Mug on Hook. In this environment, adopted from [87] and visualized in Fig. 2b, the task is to hang a mug on a hook. Both the mug and the hook shape are randomized. The actions are small 3D translations applied to the mug. This environment is challenging, as we require the RL agent to generalize over mug and hook shapes, and the tolerance between the handle opening and the hook is relatively small. Further, the agent receives a sparse reward only if the mug has been hung stably. This reward is calculated by virtually simulating a mug drop after each action: if the mug does not fall to the ground from the current state, a reward of one is assigned, otherwise zero.

Planar Pushing. The task in this environment, shown in Fig. 3b, is to push yellow box-shaped objects into the left region of the table and blue objects into the right region with the red pusher, which can move in the plane, i.e., the action is two-dimensional. This is the same environment as in [53], with the same four camera views. Each run contains a single object on the table (plus the pusher). If the box has been pushed inside its respective region, a sparse reward of one is received, otherwise zero. The boxes in the environment have different sizes and two colors, and are randomly initialized. In this environment, we cannot use keypoints for the multi-shape setting, as the reward depends on the object color; we evaluate the keypoints baseline only in the single-shape case (Appendix).

Door Opening. Fig. 4b shows the door environment, where the task is to open a sliding door with the red end-effector, which can be translated in 3 DoFs as the action. To solve this task, the agent has to push on the door handle.
As the handle position and size are randomized, the agent has to learn to interact with the handle geometry accordingly. Interestingly, as can be seen in the video in the supplementary material, the agent often chooses to push on the handle only at the beginning, as afterwards it is sufficient to push the door itself at its side. The agent receives a sparse reward if the door has been opened sufficiently; otherwise, zero reward is assigned.

6.2 Results

Figures 2a, 3a, and 4a show success rates (averaged over 6 independent experiment repetitions and over 30 test rollouts per repetition per timestep) as a function of training steps, together with 68% confidence intervals. These success rates have been evaluated using randomized object shapes and initial conditions, and therefore reflect the agent's ability to generalize over these. In all these experiments, a latent space trained with compositional NeRF supervision as the decoder consistently outperformed all other learned representations, both in terms of sample efficiency and asymptotic performance. Furthermore, our proposed framework with compositional NeRF even outperforms the expert keypoint representation. For the door environment, the 3D neural field encoder plus NeRF decoder (NeRF-RL comp. + field) reaches nearly perfect success rates. For the other two environments, the compositional 2D CNN encoder plus NeRF decoder (NeRF-RL comp. + image) was slightly, but not significantly, better than the neural field encoder. This shows that the decoder built from compositional NeRF is what matters for performance, not so much the choice of encoder.

Training the 3D neural field encoder with a contrastive loss as the supervision signal, with different camera views as positive/negative training pairs, is not able to achieve significant learning progress in these scenarios (Multi-CURL). However, the other contrastive baseline, CURL, which has a different encoder and uses image cropping as data augmentation instead of additional camera views, is able to achieve decent performance and sample efficiency on the door environment, but not on the pushing environment. In the mug environment, CURL initially makes learning progress comparable to our framework, but never reaches a success rate above 59% and then becomes unstable. Similarly, the global CNN autoencoder baseline shows decent learning progress initially in the mug and pushing scenarios (not for the door), but then becomes unstable (mug) or never surpasses a 50% success rate (pushing). Such variations in performance or unstable learning across the different environments have not been observed with our method, which is stable in all cases.

The compositional variant (NeRF-RL comp.) of our framework achieves the highest performance. Since the compositional conv. autoencoder baseline performs worse than its global variant, compositionality alone is not the sole reason for the better performance of our state representation. Indeed, the global NeRF-RL + image variant in the pushing environment is also better than all other baselines. In appendix Sec. A.1, we find a positive correlation between NeRF reconstruction quality and RL performance. Furthermore, it turns out that the performance of our framework is not significantly affected when we pretrain the encoder with less data (Sec. A.2). In Sec. A.3, we investigate the influence of the number of input views on RL performance. In the pushing scenario, only two or even one input view is sufficient for good performance.
However, for tasks that require more 3D understanding, such as the mug scenario, we observe a drop in performance when reducing the number of views from 4 to 2.

7 Discussion

Why NeRF provides better supervision. The NeRF training objective (1) strongly forces each f(·, z_j) to represent each object in its actual 3D configuration and relative to the other objects in the scene (compositional case), including their shape. This implies that the latent vectors z_j have to contain this information, i.e., they are trained to capture the object type, shape, and pose in the scene. In the global case, z_1 has to represent the geometry of the whole scene. As the tasks we consider require policies that take the geometry of the objects into account, we hypothesize that a latent vector capable of parameterizing a NeRF to reconstruct the scene in 3D space contains enough of the relevant 3D information about the objects for the policy to be successful.

Masks. In order for the auto-encoder framework to be compositional, it requires object masks. We believe that instance segmentation has reached a level of maturity [88] such that this is a fair assumption to make. As we also utilize the individual masks for the compositional conv. autoencoder and the multi-view CURL baselines, which do not show good performance, this indicates that the masks are not the main reason that our state representation achieves higher performance. This is further supported by the fact that the global NeRF-RL variant, which does not rely on individual object masks, achieved a performance higher than all baselines in the pushing scenario; i.e., masks increase the performance of NeRF-RL as they enable the compositional version, but they do not seem essential.

Offline/Online. In this work, we focused on pretraining the latent representation offline from a dataset collected by random actions. During RL, the encoder is fixed and only the policy networks are learned (a minimal sketch of this two-stage procedure is shown below). This has the advantage that the same representation can be used for different RL tasks, and the dataset to train the representation does not necessarily have to come from the same distribution. However, if a policy is needed to explore reasonable regions of the state space, collecting a dataset offline to learn a latent space that covers the state space sufficiently might be more challenging. This was not an issue for our experiments, where data collection with random actions was sufficient. Indeed, we show generalization over different starting states of the same environment and with respect to different shapes (within distribution). Future work could investigate NeRF supervision in an online setup. Note that the reconstruction loss via NeRF is computationally more demanding than via a 2D CNN deconv. decoder or a contrastive term, making NeRF supervision as an auxiliary loss at each RL training step costly. One potential solution is to apply the auxiliary loss not at every RL training step, but at a lower frequency. Regarding computational efficiency, this is where contrastive learning has an advantage over our proposed NeRF-based decoder, as the encoding with CURL can be trained within half a day, whereas the NeRF auto-encoder took up to 2 days to train for our environments. However, when using the encoder for RL, there is no difference in inference time.
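A hedged sketch of that two-stage procedure follows, assuming `encoder` and `decoder` are torch.nn.Modules with the call signatures used earlier and reusing the `reconstruction_loss` sketch above; the names and loop structure are illustrative, not the paper's code.

```python
import torch

def pretrain_encoder(encoder, decoder, dataset, epochs=10, lr=1e-4):
    """Stage 1: fit the auto-encoder offline on random-interaction data (Eq. 3)."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for images, cams, masks in dataset:
            loss = reconstruction_loss(encoder, decoder, images, cams, masks)
            opt.zero_grad()
            loss.backward()
            opt.step()

def latent_state(encoder, images, cams, masks):
    """Stage 2: the encoder stays frozen; the RL policy (here PPO) only ever
    sees the concatenated per-object latents z_{1:m}, never the pixels."""
    with torch.no_grad():
        z = torch.cat([encoder(images, cams, masks[:, j])
                       for j in range(masks.shape[1])])
    return z
```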
Multi-View. The auto-encoder framework we propose can fuse the information of multiple camera views into a latent vector describing an object in the scene. This way, occlusions can be addressed and the agent can gain a better 3D understanding of the scene from the different camera angles. Having access to multiple camera views and their camera matrices is an additional assumption we make, although we believe the capability to utilize this information is an advantage of our method.

8 Conclusion

In this work, we have proposed the idea of utilizing Neural Radiance Fields (NeRFs) to train latent spaces for RL. Our environments focus on tasks where the geometry of the objects in the scene is relevant to successfully solving the tasks. Training RL agents with the pretrained encoder that maps multiple views of the scene to a latent space consistently outperformed other ways of learning a state representation, and even keypoints chosen by expert knowledge. Our results show that the 3D prior present in compositional NeRF as the decoder is more important than priors in the encoder.

Broader Impacts. Our main contribution is a method to learn representations that improve the efficiency of vision-based RL, which could impact automation. As such, our work inherits the general ethical risks of AI, like the question of how to address the potential of increased automation in society.

Acknowledgments. The authors thank Russ Tedrake for initial discussions; Jonathan Tompson and Jon Barron for feedback on drafts; and Vincent Vanhoucke for encouraging latent NeRFs. This research has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2002/1 "Science of Intelligence" – project number 390523135. Danny Driess thanks the International Max-Planck Research School for Intelligent Systems (IMPRS-IS) for its support. Ingmar Schubert acknowledges support by the German Academic Scholarship Foundation. Yunzhu Li acknowledges support by Amazon.com Services LLC, PO# #2D-06310236 and the Wistron Corporation.
1. What is the focus and contribution of the paper on using Neural Radiance Field representation in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its originality and quality?
3. What are the weaknesses of the paper regarding its experiments and comparisons with other works?
4. How does the reviewer assess the clarity and significance of the paper's content?
5. Do you have any questions or concerns regarding the sensitivity of the RL performance, the poor performance of the keypoint method, and the limitations of the presented approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper

The paper presents a method that uses a Neural Radiance Field representation as a state space for a reinforcement learning algorithm. This approach is evaluated in tasks where multiple views of the scene are available, and where the 3D information present in the NeRF representation is useful for determining the shape and pose of objects. The paper explores settings where the NeRF represents the scene globally, and also where a compositional architecture allows representing each object with its own radiance field, and shows improved performance over several baselines.

Update after author response: Thank you to the authors for a diligent and thorough response. I have raised my score.

Strengths And Weaknesses

Originality: The major contribution of the paper is using NeRFs as a representation specifically in RL, and although this is not a surprising combination of methods, it is the first paper I am aware of that uses RL on top of the NeRF representation. The paper cites many similar approaches which use NeRFs in dynamics models which are then used for planning (though I believe the authors may want to additionally cite [Li, Li, Sitzmann, Agrawal, and Torralba. 3D Neural Scene Representations for Visuomotor Control. CoRL 2021]). The substitution of RL for planning is a small contribution.

Quality: The paper does well to address different methods for using NeRFs as a representation, both global and compositional, and has a thorough list of baselines for comparison. The focus is on highlighting the advantages of using the 3D-aware representations that come from a NeRF. However, the tasks are specifically designed to increase the likelihood that this 3D information is useful, and discussion of how sensitive the approach is to those assumptions is not addressed.

Clarity: The paper is well written and the method is described with enough background to give a self-contained understanding of the approach. There are some issues and minor typos:
- Line 139: supevision -> supervision
- Line 188: then -> them
- In the description of the CURL baseline, the text mentions carefully choosing views, but it is not clear how this is done.

Significance: This paper presents a small but interesting contribution. As the authors mention, it is unclear what realistic situations will actually provide multiple viewpoints of the type given in the simulations here, but this is a first glimpse to suggest that NeRFs can help.

Questions

I would have liked to see more discussion of the sensitivity of the RL performance to, say, the data available to train the NeRF model. What sort of coverage of the scene by input views is required?

I am surprised at the poor performance of the keypoint method. I would guess that appropriately chosen 3D keypoint data should represent an upper bound of possible performance for the NeRF method. What sort of limitation does the keypoint method have here that the NeRF method outperforms it? In the door case, for example, what information besides the keypoints would the NeRF represent?

Limitations

The authors are responsible about pointing out the assumptions they make and the limitations of the presented approach, and discuss these assumptions and limitations in an up-front way.
NIPS
Title Reinforcement Learning with Neural Radiance Fields Abstract It is a long-standing problem to find effective representations for training reinforcement learning (RL) agents. This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information. Specifically, we propose to train an encoder that maps multiple image observations to a latent space describing the objects in the scene. The decoder built from a latent-conditioned NeRF serves as the supervision signal to learn the latent space. An RL algorithm then operates on the learned latent space as its state representation. We call this NeRF-RL. Our experiments indicate that NeRF as supervision leads to a latent space better suited for the downstream RL tasks involving robotic object manipulations like hanging mugs on hooks, pushing objects, or opening doors. Video: https://dannydriess.github.io/nerf-rl 1 Introduction The sample efficiency of reinforcement learning (RL) algorithms crucially depends on the representation of the underlying system state they operate on [1, 2, 3, 4, 5, 6, 7]. Sometimes, a low-dimensional (direct) representation of the state, such as the positions of the objects in the environment, is considered to make the resulting RL problem most efficient [2]. However, such low-dimensional, direct state representations can have several disadvantages. On the one hand, a perception module, e.g., pose estimation, is necessary in the real world to obtain the representation from raw observations, which often is difficult to achieve in practice with sufficient robustness. On the other hand, if the goal is to learn policies that generalize over different object shapes [8], using a low-dimensional state representation is often impractical. Such scenarios, while challenging for RL, are common, e.g., in robotic manipulation tasks. Therefore, there is a large history of approaches that consider RL directly from raw, high-dimensional observations like images (e.g., [9, 10]). Typically, an encoder takes the high-dimensional input and maps it to a low-dimensional latent representation of the state. The RL algorithm (e.g., the Q-function or the policy network) then operates on the latent vector as state input. This way, no separate perception module is necessary, the framework can extract information from the raw observations that are relevant for the task, and the RL agent, in principle, may generalize over challenging environments, in which, e.g., object shapes are varied. While these are advantages in principle, jointly training encoders capable of processing high-dimensional inputs from the RL signal alone is challenging. To address this, one approach is to pretrain the encoder on a different task, e.g., image reconstruction [1, 4, 11], multi-view consistency [6], or a time-constrastive task [3]. Alternatively, an auxiliary loss on the latent encoding can be added during the RL procedure [5]. In both cases, the choice of the actual (auto-)encoder architecture and associated (auxiliary) loss function has a significant influence on the usefulness of the resulting latent space for the downstream ∗equal contribution. Correspondence: [email protected] 36th Conference on Neural Information Processing Systems (NeurIPS 2022). RL task. Especially for image data, convolutional neural networks (CNNs) are commonly used for the encoder [12]. 
However, 2D CNNs have a 2D (equivariance) bias, while for many RL tasks, the 3D structure of our world is essential. Architectures like Vision Transformers [13, 14] process images with no such direct 2D bias, but they often require large scale data, which might be challenging in RL applications. Additionally, although multiple uncalibrated 2D image inputs can be used with generic image encoders [15], they do not benefit from 3D inductive biases, which may help for example in resolving ambiguities in 2D images such as occlusions and object permanence. Recently, Neural Radiance Fields (NeRFs) [16] have shown great success in learning to represent scenes with a neural network that enables to render the scene from novel viewpoints, and have sparked broad interest in computer vision [17]. NeRFs exhibit a strong 3D inductive bias, leading to better scene reconstruction capabilities than methods composed of generic image encoders (e.g., [18]). In the present work, we investigate whether incorporating these 3D inductive biases of NeRFs into learning a state representation can benefit RL. Specifically, we propose to train an encoder that maps multiple RGB image views of the scene to a latent representation through an auto-encoder structure, where a (compositional) NeRF decoder provides the self-supervision signal using an image reconstruction loss for each view. In the experiments, we show for multiple environments that supervision from NeRF leads to a latent representation that makes the downstream RL procedure more sample efficient compared to supervision via a 2D CNN decoder, a contrastive loss on the latent space, or even hand-engineered, perfect low-level state information given as keypoints. Commonly, RL is trained on environments where the objects have the same shape. Our environments include hanging mugs on hooks, pushing objects on a table, and a door opening scenario. In all of these, the objects’ shapes are not fixed, and we require the agent to generalize over all shapes from a distribution. To summarize our main contributions: (i) we propose to train state representations for RL with NeRF supervision, and (ii) we empirically demonstrate that an encoder trained with a latent-conditioned NeRF decoder, especially with an object-compositional NeRF decoder, leads to increased RL performance relative to standard 2D CNN auto-encoders, contrastive learning, or expert keypoints. 2 Related Work Neural Scene/Object Representations in Computer Vision, and Applications. To our knowledge, the present work is the first to explore if neural scene representations like NeRFs can benefit RL. Outside of RL, however, there has been a very active research field in the area of neural scene representations, both in the representations themselves [19, 20, 21, 22] and their applications; see [23, 24, 17] for recent reviews. Within the family of NeRFs and related methods, major thrusts of research have included: improving modeling formulations [25, 26], modeling larger scenes [26, 27], addressing (re-)lighting [28, 29, 30], and an especially active area of research has been in improving speed, both of training and of inference-time rendering [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. In our case, we are not constrained by inference-time computation issues, since we do not need to render images, and only have to run our latent-space encoder (with a runtime of approx. 7 ms on an RTX3090). 
Of particular relevance, various methods have developed latent-conditioned [42, 43, 44] or compositional/object-oriented approaches for NeRFs [45, 46, 47, 48, 49, 50, 51, 52, 53], although neither they nor, to our knowledge, other NeRF-style methods have been applied to RL. Neural scene representations have found application across many fields (e.g., augmented reality and medical imaging [54]), and both NeRFs [55, 56, 57, 58] and other neural scene approaches [59, 60, 61, 62] have started to be used for various problems in robotics, including pose estimation [55], trajectory planning [56], visual foresight [11, 53], grasping [59, 57], and rearrangement tasks [60, 61, 58].

Learning State Representations for Reinforcement Learning. One of the key enabling factors for the success of deep RL is its ability to find effective representations of the environment from high-dimensional observation data [10, 63]. Extensive research has gone into investigating different ways to learn better state representations using various auxiliary objective functions. Contrastive learning is a common objective and has shown success in unsupervised representation learning in computer vision applications [64, 65]. Researchers built upon this success and have shown that such learning objectives can lead to better performance and sample efficiency in deep RL [66, 67], where the contrasting signals can come from time alignment [68, 3], camera viewpoints [69], and different sensory modalities [70], with applications in real-world robotic tasks [6, 71]. Extensive efforts have investigated the role of representation learning in RL [72], provided a detailed analysis of the importance of different visual representation pretraining methods [73], and shown how to improve training stability in the face of multiple auxiliary losses [74]. There is also a range of additional explorations of pretraining methods with novel objective functions (e.g., bisimulation metrics [75] and temporal cycle-consistency loss [76]) and less-explored data sources (e.g., in-the-wild images [77] and action-free videos [78]). See the survey [79] for more related work in this direction. Our method is different in that we explicitly utilize a decoder that includes strong 3D inductive biases provided by NeRFs, which we empirically show improves RL for tasks that depend on the geometry of the objects.

3 Background

3.1 Reinforcement Learning

This work considers decision problems that can be described as discrete-time Markov Decision Processes (MDPs) $M = \langle S, A, T, \gamma, R, P_0 \rangle$. $S$ and $A$ are the sets of all states and actions, respectively. The transition probability (density) from $s$ to $s'$ using an action $a$ is $T(s' \mid s, a)$. The agent receives a real-valued reward $R(s, a, s')$ after each step. The discount factor $\gamma \in [0, 1)$ trades off immediate and future rewards. $P_0 : S \to \mathbb{R}_0^+$ is the distribution of the start state. RL algorithms try to find the optimal policy $\pi^* : S \times A \to \mathbb{R}_0^+$, where
$$\pi^* = \arg\max_\pi \sum_{t=0}^{\infty} \gamma^t \, \mathbb{E}_{s_{t+1} \sim T(\cdot \mid s_t, a_t),\; a_t \sim \pi(\cdot \mid s_t),\; s_0 \sim P_0} \left[ R(s_t, a_t, s_{t+1}) \right].$$
Importantly, in this work, we consider RL problems where the state $s$ encodes both the position and the shape of the objects in the scene. We require the RL agent to generalize over all of these shapes at test time. We can therefore think of the state as a tuple $s = (s_p, s_s)$, where $s_p$ encodes positional information, and $s_s$ encodes the shapes involved.
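To make the objective above concrete, the following minimal sketch estimates the discounted return that $\pi^*$ maximizes via Monte-Carlo rollouts. This is our own illustration, not the paper's code; a Gym-style `env`/`policy` interface and all names are assumptions.

```python
import numpy as np

def estimate_return(env, policy, gamma=0.99, n_rollouts=100, horizon=200):
    """Monte-Carlo estimate of E[sum_t gamma^t R(s_t, a_t, s_{t+1})]."""
    returns = []
    for _ in range(n_rollouts):
        s = env.reset()                    # s_0 ~ P_0
        g, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)                  # a_t ~ pi(. | s_t)
            s, r, done, _ = env.step(a)    # s_{t+1} ~ T(. | s_t, a_t), reward r
            g += discount * r              # accumulate gamma^t * R
            discount *= gamma
            if done:
                break
        returns.append(g)
    return float(np.mean(returns))
```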
We focus the experiments on sparse reward settings, meaning $R(s, a, s') = R_0 > 0$ for $s' \in S_g$ and $R(s, a, s') = 0$ for $s' \in S \setminus S_g$, where the volume of $S_g \subset S$ is much smaller than the volume of $S$. The state space $S$ usually is low-dimensional or a minimal description of the degrees of freedom of the system. In this work, we consider that the RL algorithm has access only to a (high-dimensional) observation $y \in Y$ of the scene (e.g., RGB images). In particular, this means that the policy takes observations as input, $a \sim \pi(\cdot \mid y)$. Since we assume that the underlying state $s = (s_p, s_s)$ is fully observable from $y$, we can treat $y$ like a state for an MDP.

Reinforcement Learning with Learned Latent Scene Representations. The general idea of RL with learned latent scene representations is to learn an encoder $\Omega$ that maps an observation $y \in Y$ to a $k$-dimensional latent vector $z = \Omega(y) \in Z \subset \mathbb{R}^k$ of the scene. The actual RL components, e.g., the Q-function or policy, then operate on $z$ as the state description. For a policy $\pi$, this means that the action $a \sim \pi(\cdot \mid z) = \pi(\cdot \mid \Omega(y))$ is conditional on the latent vector $z$ instead of the observation $y$ directly. The dimension $k$ of the latent vector is typically (much) smaller than that of the observation space $Y$, but larger than that of the state space $S$.

3.2 Neural Radiance Fields (NeRFs)

The general idea of NeRF, originally proposed by [16], is to learn a function $f = (\sigma, c)$ that predicts the emitted RGB color value $c(x) \in \mathbb{R}^3$ and volume density $\sigma(x) \in \mathbb{R}_{\geq 0}$ at any 3D world coordinate $x \in \mathbb{R}^3$. Based on $f$, an image from an arbitrary view and camera parameters can be rendered by computing the color $C(r) \in \mathbb{R}^3$ of each pixel along its corresponding camera ray $r(\alpha) = r(0) + \alpha d$ through the volumetric rendering relation
$$C(r) = \int_{\alpha_n}^{\alpha_f} T_f(r, \alpha)\, \sigma(r(\alpha))\, c(r(\alpha))\, d\alpha \quad \text{with} \quad T_f(r, \alpha) = \exp\!\left( -\int_{\alpha_n}^{\alpha} \sigma(r(u))\, du \right). \qquad (1)$$
Here, $r(0) \in \mathbb{R}^3$ is the camera origin, $d \in \mathbb{R}^3$ the pixel-dependent direction of the ray, and $\alpha_n, \alpha_f \in \mathbb{R}$ the near and far bounds within which objects are expected, respectively. The camera rays are determined from the camera matrix $K$ (intrinsics and extrinsics) describing the desired view.
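In practice, Eq. (1) is evaluated with a discrete quadrature along each ray. The following minimal PyTorch sketch shows the standard NeRF discretization [16]; all shapes and names are illustrative assumptions, not the paper's implementation.

```python
import torch

def render_ray_color(f, origin, direction, alpha_n, alpha_f, n_samples=64):
    """Approximate C(r) from Eq. (1) along one ray r(a) = origin + a * direction.

    `f` maps a batch of 3D points to (sigma, c): volume densities of shape
    (n,) and RGB colors of shape (n, 3). Uses the standard piecewise-constant
    quadrature of NeRF [16].
    """
    alphas = torch.linspace(alpha_n, alpha_f, n_samples)   # sample depths
    deltas = alphas[1:] - alphas[:-1]                      # bin widths
    pts = origin + alphas[:-1, None] * direction           # points on the ray
    sigma, c = f(pts)                                      # densities and colors

    tau = sigma * deltas                                   # optical thickness per bin
    # Transmittance T_f(r, alpha_i) = exp(-sum_{j<i} tau_j)
    T = torch.exp(-torch.cat([torch.zeros(1), torch.cumsum(tau, dim=0)[:-1]]))
    weights = T * (1.0 - torch.exp(-tau))                  # compositing weights
    return (weights[:, None] * c).sum(dim=0)               # pixel color C(r)
```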
4 Learning State Representations for RL with NeRF Supervision

This section describes our proposed framework, in which we use a latent state space for RL that is learned from NeRF supervision. For learning the latent space, we use an encoder-decoder architecture where the decoder is a latent-conditioned NeRF, which may be either a global [42, 43, 44] or a compositional NeRF decoder [53]. To our knowledge, no prior work has used such NeRF-derived supervision for RL. In Sec. 4.1 we describe this proposition, Sec. 4.2 provides an overview of the encoder-decoder training, and Secs. 4.3 and 4.4 introduce options for the NeRF decoder and encoder, respectively.

4.1 Using Latent-Conditioned NeRF for RL

We propose the state representation $z$ on which an RL algorithm operates to be a latent vector produced by an encoder that maps images from multiple views to a latent $z$, which is trained with a (compositional) latent-conditioned NeRF decoder. As will be verified in the experiments, we hypothesize that this framework is beneficial for the downstream RL task, as it produces latent vectors that represent the actual 3D geometry of the objects in the scene, can handle multiple objects well, and can fuse multiple views in a consistent way to deal with occlusions by providing shape completion, all of which is relevant to solve tasks where the geometry is important.

There are two steps to our framework, as shown in Fig. 1. First, we train the encoder + decoder from a dataset collected by random interactions with the environment, i.e., we do not yet need a trained policy. Second, we take the encoder trained in the first step, which we leave frozen, and use the latent space to train an RL policy. Note that we investigate two variants of the auto-encoder framework: a global one, where the whole scene is represented by one single latent vector, and a compositional one, where objects are represented by their own latent vectors. For the latter, objects are identified by masks in the views.

4.2 Overview: Auto-Encoder with Latent-Conditioned NeRF Decoder

Assume that an observation $y = (I^{1:V}, K^{1:V}, M^{1:V})$ of the scene consists of RGB images $I^i \in \mathbb{R}^{3 \times h \times w}$, $i = 1, \ldots, V$, taken from $V$ many camera views, their respective camera projection matrices $K^i \in \mathbb{R}^{3 \times 4}$ (including both intrinsics and extrinsics), and per-view image masks $M^{1:V}$. For a global NeRF decoder, these are global non-background masks $M^i_{\text{tot}} \in \{0, 1\}^{h \times w}$; for a compositional NeRF decoder as in [53], these are sets of binary masks $M^i_j \in \{0, 1\}^{h \times w}$ that identify the objects $j = 1, \ldots, m$ in the scene in view $i$. The global case is equivalent to $m = 1$, $M^i_{j=1} = M^i_{\text{tot}}$. The encoder $\Omega$ maps these posed image observations from the multiple views into a set of latent vectors $z_{1:m}$, where in the compositional case each $z_j$ represents one object in the scene separately, and in the global case the single $z_1$ represents all objects in the scene. This is achieved by querying $\Omega$ on the masks $M^{1:V}_j$, i.e.,
$$z_j = \Omega\!\left(I^{1:V}, K^{1:V}, M^{1:V}_j\right) \in \mathbb{R}^k \qquad (2)$$
for object $j$. The supervision signal to train the encoder is the image reconstruction loss
$$L_i = \left\| I^i \circ M^i_{\text{tot}} - D\!\left(\Omega\!\left(I^{1:V}, K^{1:V}, M^{1:V}_{1:m}\right), K^i\right) \right\|_2^2 \qquad (3)$$
on the input view $i$, where the decoder $D$ renders an image $I = D(z_{1:m}, K)$ for arbitrary views specified by the camera matrix $K$ from the set of latent vectors $z_{1:m}$. Both the encoder and decoder are trained end-to-end at the same time. The target images for the decoder are the same in both the global and compositional case: the globally masked image $I^i \circ M^i_{\text{tot}}$ ($\circ$ is the element-wise product). In the compositional case this can be computed with $M^i_{\text{tot}} = \bigvee_{j=1}^{m} M^i_j$. By fusing the information from multiple views of the objects into the latent vector, from which the decoder has to be able to render the scene from multiple views, this auto-encoder framework can learn latent vectors that represent the 3D configurations (shape and pose) of the objects in the scene.
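A minimal sketch of one evaluation of Eqs. (2) and (3) may help make the training objective concrete. Here `encoder` and `decoder` stand in for $\Omega$ and $D$; all tensor shapes and names are assumptions for illustration, not the paper's code.

```python
import torch

def reconstruction_loss(encoder, decoder, images, cams, obj_masks, tot_masks):
    """Sum of the per-view losses L_i of Eq. (3) for one observation.

    images:    (V, 3, h, w)  RGB views I^{1:V}
    cams:      (V, 3, 4)     camera matrices K^{1:V}
    obj_masks: (m, V, h, w)  per-object masks M^{1:V}_{1:m}
    tot_masks: (V, h, w)     global non-background masks M^i_tot
    """
    m, V = obj_masks.shape[0], images.shape[0]
    # Eq. (2): one latent vector per object, fused from all V masked views
    z = torch.stack([encoder(images, cams, obj_masks[j]) for j in range(m)])

    loss = 0.0
    for i in range(V):
        target = images[i] * tot_masks[i]      # I^i o M^i_tot (element-wise)
        rendered = decoder(z, cams[i])         # D(z_{1:m}, K^i), a (3, h, w) image
        loss = loss + ((target - rendered) ** 2).sum()   # squared L2, Eq. (3)
    return loss
```

In the global case, $m = 1$ and the single object mask equals $M^i_{\text{tot}}$, so the same sketch covers both variants.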
4.3 Latent-Conditioned NeRF Decoder Details

Global. The original NeRF formulation [16] learns a fully connected network $f$ that represents one single scene (Sec. 3.2). In order to create a decoder from NeRFs within an auto-encoder to learn a latent space, we condition the NeRF $f(\cdot, z)$ on the latent vector $z \in \mathbb{R}^k$ [42, 43, 44]. While approaches such as [42, 43, 44] use the latent code to represent factors such as lighting or category-level generalization, in our case the latent code is intended to represent the scene variation, i.e., shape and configuration of objects, such that a downstream RL agent may use this as a state representation.

Compositional. In the compositional case, the encoder produces a set of latent vectors $z_{1:m}$ describing each object $j = 1, \ldots, m$ individually; this leads to $m$ NeRFs $(\sigma_j(x), c_j(x)) = f_j(x) = f(x, z_j)$, $j = 1, \ldots, m$, with their associated volume densities $\sigma_j$ and color values $c_j$. Note that while one could use different networks $f_j$ with their own network weights for each object, we have a single network $f$ for all objects. This means that both the object's pose as well as its shape and type are represented through the latent code $z_j$. In order to force those conditioned NeRFs to learn the 3D configuration of each object separately, we compose them into a global NeRF model with the composition formulas (proposed, e.g., by [80, 81])
$$\sigma(x) = \sum_{j=1}^{m} \sigma_j(x), \qquad c(x) = \frac{1}{\sigma(x)} \sum_{j=1}^{m} \sigma_j(x)\, c_j(x).$$
As this composition happens in 3D space, the latent vectors will be learned such that they correctly represent the actual shape and pose of the objects in the scene with respect to the other objects, which we hypothesize may be useful for the downstream RL agent.

4.4 Encoder Details

The encoder $\Omega$ operates by fusing multiple views together to estimate the latent vector for the RL task. Since the scientific question of this work is to investigate whether a decoder built from NeRFs to train the encoder end-to-end is beneficial for RL, we consider two different encoder architectures. The first one is a 2D CNN that averages feature encodings from the different views, where each encoding is additionally conditioned on the camera matrix of that view. The second one is based on a learned 3D neural vector field that incorporates 3D biases by fusing the different camera views in 3D space through 3D convolutions and camera projection. This way, we are able to distinguish between the importance of 3D priors incorporated into the encoder versus the decoder.

Per-image CNN Encoder ("Image encoder"). For the global version, we utilize the network architecture from [11] as an encoder choice. In order to work with multiple objects in the compositional case, we modify the architecture from [11] by taking the object masks into account as follows. For each object $j$, the 2D CNN encoder computes
$$z_j = \Omega_{\text{CNN}}\!\left(I^{1:V}, K^{1:V}, M^{1:V}_j\right) = h_{\text{MLP}}\!\left( \frac{1}{V} \sum_{i=1}^{V} g_{\text{MLP}}\!\left( E_{\text{CNN}}\!\left(I^i \circ M^i_j\right), K^i \right) \right). \qquad (4)$$
$E_{\text{CNN}}$ is a ResNet-18 [82] CNN feature extractor that determines a feature from the masked input image $I^i \circ M^i_j$ of object $j$ for each view $i$, which is then concatenated with the (flattened) camera matrix. The output of the network $g_{\text{MLP}}$ is hence the encoding of each view, including the camera information, which is averaged and then processed with $h_{\text{MLP}}$ to produce the final latent vector. Note that in the global case, we set $m = 1$, $M^i_{j=1} = M^i_{\text{tot}}$, such that $\Omega_{\text{CNN}}$ produces a single latent vector.

Neural Field 3D CNN Encoder ("Field encoder"). Several authors [43] have considered incorporating 3D biases into learning an encoder by computing pixel-aligned features from queried 3D locations of the scene to fuse the information from the different camera views directly in 3D space. We utilize the encoder architecture from [53], where the idea is to learn a neural vector field $\phi\!\left[I^{1:V}, M^{1:V}_j\right] : \mathbb{R}^3 \to \mathbb{R}^E$ over 3D space, conditioned on the input views and masks. The features of $\phi$ are computed by projecting the query point into the camera coordinate system of the respective view. To turn $\phi$ into a latent vector, it is queried on a workspace set $X_h \in \mathbb{R}^{d_X \times h_X \times w_X}$ (a 3D grid) and then processed by a 3D convolutional network, i.e., $z_j = E_{\text{3D CNN}}\!\left( \phi\!\left[I^{1:V}, M^{1:V}_j\right](X_h) \right)$. This method differs from [43, 83, 60] by computing a latent vector from the pixel-aligned features.
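Returning to the composition formulas of Sec. 4.3, they amount to a density-weighted color average and can be sketched in a few lines. This is a minimal illustration with assumed shapes; `f` stands for the shared latent-conditioned NeRF.

```python
import torch

def compose_nerfs(f, z_all, x, eps=1e-10):
    """Compose m latent-conditioned NeRFs into one scene model (Sec. 4.3).

    f(x, z) -> (sigma, c) with sigma of shape (n,) and c of shape (n, 3);
    z_all is the (m, k) stack of per-object latent vectors z_{1:m}.
    """
    per_obj = [f(x, z) for z in z_all]               # per-object (sigma_j, c_j)
    sigmas = torch.stack([s for s, _ in per_obj])    # (m, n)
    colors = torch.stack([c for _, c in per_obj])    # (m, n, 3)

    sigma = sigmas.sum(dim=0)                        # sigma(x) = sum_j sigma_j(x)
    # c(x) = (1 / sigma(x)) * sum_j sigma_j(x) * c_j(x)
    c = (sigmas[..., None] * colors).sum(dim=0) / (sigma[..., None] + eps)
    return sigma, c
```

The composed $(\sigma, c)$ can then be rendered with the quadrature sketched after Sec. 3.2, so each $z_j$ is pushed to explain its object's geometry within the shared 3D space.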
5 Baselines / Alternative State Representations

In this section, we briefly describe alternative ways of training an encoder for RL, which we investigate in the experiments as baselines and ablations. For details, refer to the appendix.

Conv. Autoencoder. This baseline uses a standard CNN decoder based on deconvolutions instead of NeRF to reconstruct the image from the latent representation, similar to [1]. Therefore, with this baseline we investigate the influence of the NeRF decoder relative to CNN decoders. We follow the architecture of [11] for the deconvolution part in the global case. In the compositional case, we modify the architecture to be able to deal with a set of individual latent vectors instead of a single, global one. The image $I = D_{\text{deconv}}\!\left(g_{\text{MLP}}\!\left(\frac{1}{m}\sum_{j=1}^{m} z_j\right), K\right)$ is rendered from $z_{1:m}$ by first averaging the latent vectors and then processing the averaged vector with a fully connected network $g_{\text{MLP}}$, leading to an aggregated feature. This aggregated feature is concatenated with the (flattened) camera matrix $K$ describing the desired view and then rendered into the image with $D_{\text{deconv}}$. In the experiments, we utilize this decoder as the supervision signal to train the latent space produced by the 2D CNN encoder from Sec. 4.4. In the compositional version, the 2D CNN encoder (4) uses the same object masks as the compositional NeRF-RL variant.

Contrastive Learning. As an alternative to learning an encoder via a reconstruction loss, the idea of contrastive learning [84] is to define a loss function directly on the latent space that pulls latent vectors describing the same configurations together (called positive samples) while pushing ones representing different system states apart (called negative samples). A popular approach to achieve this is the InfoNCE loss [85, 64]. Let $y_i$ and $\tilde{y}_i$ be two different observations of the same state, where $\tilde{\cdot}$ denotes a perturbed/augmented version of the observation. For a mini-batch of observations $\{(y_i, \tilde{y}_i)\}_{i=1}^{n}$, after encoding those into their respective latent vectors $z_i = \Omega(y_i)$, $\tilde{z}_i = \Omega(\tilde{y}_i)$ with the encoder $\Omega$, the loss for that batch would use $(z_i, \tilde{z}_i)$ as positive pairs and $(z_i, \tilde{z}_{j \neq i})$ as negative pairs, or some similar variation. A crucial question in contrastive learning is how the observation $y$ is perturbed/augmented into $\tilde{y}$ to generate positive and negative training pairs, described in the following.

CURL. In CURL [5], the input image is randomly cropped to generate $y$ and $\tilde{y}$. We closely follow the hyperparameters and design of [5]. CURL operates on a single input view, and we choose a view for this baseline from which the state of the environment can be inferred as well as possible (Fig. 17).

Multi-View CURL. This baseline investigates whether the neural field 3D encoder (Sec. 4.4) can be trained with a contrastive loss. As this encoder operates on multiple input views, we double the number of available camera views. Half of the views are the same as in the other experiments; the other half are captured from slightly perturbed camera angles. We use the same loss as CURL, but with different contrastive pairs: rather than coming from augmentation, the contrastive style is taken from TCN [68], where positive pairs come from different views at the same moment in time, while negative pairs come from different times. Therefore, this baseline can be seen as a multi-view adaptation of CURL [5].
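For reference, the generic InfoNCE objective underlying both contrastive baselines can be sketched as follows. This is the common cosine-similarity variant [85, 64], not necessarily the exact (e.g., bilinear) form used by CURL [5]; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def info_nce(z, z_tilde, temperature=0.1):
    """Generic InfoNCE loss for a batch of latent pairs (z_i, z~_i).

    z, z_tilde: (n, k) latents of two augmentations of the same n states;
    row i of each forms the positive pair, all other rows act as negatives.
    """
    z = F.normalize(z, dim=1)
    z_tilde = F.normalize(z_tilde, dim=1)
    logits = z @ z_tilde.T / temperature      # (n, n) pairwise similarities
    labels = torch.arange(z.shape[0])         # positives lie on the diagonal
    return F.cross_entropy(logits, labels)
```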
Direct State / Keypoint Representations. Finally, we also consider a direct, low-dimensional representation of the state. Since we are interested in generalizing over different object shapes, we consider multiple 3D keypoints that are attached at relevant locations of the objects by expert knowledge and observed with a perfect keypoint detector [8]. See Fig. 2b for a visualization of those keypoints. The keypoints provide information about both the object's shape and its pose. Furthermore, as seen in Fig. 2b, they have been chosen to reflect those locations in the environment relevant to solving the task. Additionally, we report results where the state is represented by the poses of the objects; as this cannot represent object shape, in this case we use a constant object shape for training and testing.

6 Experiments

We evaluate our proposed method on different environments where the geometry of the objects in the scene is important to solve the task successfully. Please also refer to the video at https://dannydriess.github.io/nerf-rl. Commonly, RL is trained and evaluated on a single environment, where only the poses are changed but the involved object shapes are kept constant. Since latent-conditioned NeRFs have been shown to be capable of generalizing over geometry [43], we consider experiments where we require the RL agent to generalize over object shapes within some distribution. Answering the scientific question of this work requires environments with multi-view observations and, for the compositional versions, object masks as well. These are not provided in standard RL benchmarks, which is the reason for choosing the environments investigated in this work. We use PPO [86] as the RL algorithm and four camera views in all experiments. Refer to the appendix for more details about our environments, parameter choices, network architectures, and training times.

6.1 Environments

Mug on Hook. In this environment, adopted from [87] and visualized in Fig. 2b, the task is to hang a mug on a hook. Both the mug and the hook shapes are randomized. The actions are small 3D translations applied to the mug. This environment is challenging, as we require the RL agent to generalize over mug and hook shapes, and the tolerance between the handle opening and the hook is relatively small. Further, the agent receives a sparse reward only if the mug has been hung stably. This reward is calculated by virtually simulating a mug drop after each action: if the mug does not fall onto the ground from the current state, a reward of one is assigned, otherwise zero.

Planar Pushing. The task in this environment, shown in Fig. 3b, is to push yellow box-shaped objects into the left region of the table and blue objects into the right region with the red pusher, which can move in the plane, i.e., the action is two-dimensional. This is the same environment as in [53] with the same four camera views. Each run contains a single object on the table (plus the pusher). If the box has been pushed inside its respective region, a sparse reward of one is received, otherwise zero. The boxes in the environment have different sizes and two colors, and are randomly initialized. In this environment, we cannot use keypoints for the multi-shape setting, as the reward depends on the object color; we evaluate the keypoints baseline only in the single-shape case (Appendix).
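As an example of the sparse rewards used, the pushing task's reward can be sketched as follows. The region boundaries `left_x`/`right_x` are illustrative assumptions, not the actual environment parameters.

```python
def pushing_reward(obj_color, obj_xy, left_x=-0.2, right_x=0.2):
    """Sparse reward for the planar pushing task (illustrative thresholds).

    Returns 1.0 once a yellow box is inside the left region or a blue box
    is inside the right region of the table, and 0.0 otherwise.
    """
    x, _ = obj_xy
    if obj_color == "yellow" and x < left_x:
        return 1.0
    if obj_color == "blue" and x > right_x:
        return 1.0
    return 0.0
```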
Door Opening. Fig. 4b shows the door environment, where the task is to open a sliding door with the red end-effector, which can be translated in 3 DoFs as the action. To solve this task, the agent has to push on the door handle. As the handle position and size are randomized, the agent has to learn to interact with the handle geometry accordingly. Interestingly, as can be seen in the video in the supplementary material, the agent often chooses to push on the handle only at the beginning, as, afterwards, it is sufficient to push the door itself at its side. The agent receives a sparse reward if the door has been opened sufficiently; otherwise, zero reward is assigned.

6.2 Results

Figs. 2a, 3a, and 4a show success rates (averaged over 6 independent experiment repetitions and over 30 test rollouts per repetition per timestep) as a function of training steps, together with the 68% confidence intervals. These success rates have been evaluated using randomized object shapes and initial conditions, and therefore reflect the agent's ability to generalize over these. In all these experiments, a latent space trained with compositional NeRF supervision as the decoder consistently outperformed all other learned representations, both in terms of sample efficiency and asymptotic performance. Furthermore, our proposed framework with compositional NeRF even outperforms the expert keypoint representation. For the door environment, the 3D neural field encoder plus NeRF decoder (NeRF-RL comp. + field) reaches nearly perfect success rates. For the other two environments, the compositional 2D CNN encoder plus NeRF decoder (NeRF-RL comp. + image) was slightly better than the neural field encoder, but not significantly. This shows that the decoder built from compositional NeRF is relevant for the performance, not so much the choice of the encoder.

Training the 3D neural field encoder with a contrastive loss as supervision signal, with different camera views as positive/negative training pairs, is not able to achieve significant learning progress in these scenarios (Multi-CURL). However, the other contrastive baseline, CURL, which has a different encoder and uses image cropping as data augmentation instead of additional camera views, is able to achieve decent performance and sample efficiency on the door environment, but not on the pushing environment. In the mug environment, CURL initially is able to make learning progress comparable to our framework, but never reaches a success rate above 59% and then becomes unstable. Similarly, the global CNN autoencoder baseline shows decent learning progress initially on the mug and pushing scenarios (not the door), but then becomes unstable (mug) or never surpasses a 50% success rate (pushing). Such performance variations and unstable learning across the different environments have not been observed with our method, which is stable in all cases.

The compositional variant (NeRF-RL comp.) of our framework achieves the highest performance. Since the conv. comp. autoencoder baseline has worse performance than its global variant, compositionality alone is not the sole reason for the better performance of our state representation. Indeed, the global NeRF-RL + image variant in the pushing environment is also better than all other baselines. In Appendix Sec. A.1, we find a positive correlation between NeRF reconstruction quality and RL performance. Furthermore, it turns out that the performance of our framework is not significantly affected when we pretrain the encoder with less data (Sec. A.2). In Sec. A.3, we investigate the influence of the number of input views on the RL performance. In the pushing scenario, only two or even one input view are sufficient for good performance.
However, for tasks that require more 3D understanding, such as the mug scenario, we observe a drop in performance when reducing the number of views from 4 to 2.

7 Discussion

Why NeRF provides better supervision. The NeRF training objective (1) strongly forces each $f(\cdot, z_j)$ to represent each object in its actual 3D configuration, including its shape, and relative to the other objects in the scene (compositional case). This implies that the latent vectors $z_j$ have to contain this information, i.e., they are trained to determine the object type, shape, and pose in the scene. In the global case, $z_1$ has to represent the geometry of the whole scene. As the tasks we consider require policies to take the geometry of the objects into account, we hypothesize that a latent vector that is capable of parameterizing a NeRF to reconstruct the scene in 3D space has to contain enough of the relevant 3D information about the objects for the policy to be successful as well.

Masks. In order for the auto-encoder framework to be compositional, it requires object masks. We believe that instance segmentation has reached a level of maturity [88] that makes this a fair assumption. As we also utilize the individual masks for the compositional conv. autoencoder and the multi-view CURL baseline, which do not show good performance, this indicates that the masks are not the main reason that our state representation achieves higher performance. This is further supported by the fact that the global NeRF-RL variant, which does not rely on individual object masks, achieved a performance higher than all baselines on the pushing scenario; i.e., masks increase the performance of NeRF-RL insofar as they enable the compositional version, but they do not seem essential.

Offline/Online. In this work, we focused on pretraining the latent representation offline from a dataset collected by random actions. During RL, the encoder is fixed and only the policy networks are learned. This has the advantage that the same representation can be used for different RL tasks and that the dataset used to train the representation does not necessarily have to come from the same distribution. However, if a policy is needed to explore reasonable regions of the state space, collecting a dataset that covers the state space sufficiently is more challenging for an offline approach. This was not an issue for our experiments, where data collection with random actions was sufficient. Indeed, we show generalization over different starting states of the same environment and with respect to different shapes (within distribution). Future work could investigate NeRF supervision in an online setup. Note that the reconstruction loss via NeRF is computationally more demanding than via a 2D CNN deconv. decoder or a contrastive term, making NeRF supervision as an auxiliary loss at each RL training step costly. One potential solution for this is to apply the auxiliary loss not at every RL training step, but with a lower frequency. Regarding computational efficiency, this is where contrastive learning has an advantage over our proposed NeRF-based decoder: the encoding with CURL can be trained within half a day, whereas the NeRF auto-encoder took up to 2 days to train for our environments. However, when using the encoder for RL, there is no difference in inference time.

Multi-View. The auto-encoder framework we propose can fuse the information of multiple camera views into a latent vector describing an object in the scene.
This way, occlusions can be addressed, and the agent can gain a better 3D understanding of the scene from the different camera angles. Having access to multiple camera views and their camera matrices is an additional assumption we make, although we believe the capability to utilize this information is an advantage of our method.

8 Conclusion

In this work, we have proposed the idea of utilizing Neural Radiance Fields (NeRFs) to train latent spaces for RL. Our environments focus on tasks where the geometry of the objects in the scene is relevant for successfully solving the tasks. Training RL agents with the pretrained encoder that maps multiple views of the scene to a latent space consistently outperformed other ways of learning a state representation, and even keypoints chosen by expert knowledge. Our results show that the 3D prior present in compositional NeRF as the decoder is more important than priors in the encoder.

Broader Impacts. Our main contribution is a method to learn representations that improve the efficiency of vision-based RL, which could impact automation. As such, our work inherits general ethical risks of AI, like the question of how to address the potential of increased automation in society.

Acknowledgments
The authors thank Russ Tedrake for initial discussions; Jonathan Tompson and Jon Barron for feedback on drafts; and Vincent Vanhoucke for encouraging latent NeRFs. This research has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2002/1 "Science of Intelligence" – project number 390523135. Danny Driess thanks the International Max-Planck Research School for Intelligent Systems (IMPRS-IS) for the support. Ingmar Schubert acknowledges support by the German Academic Scholarship Foundation. Yunzhu Li acknowledges support by Amazon.com Services LLC, PO# #2D-06310236, and the Wistron Corporation.
1. What is the focus and contribution of the paper on reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of sample efficiency?
3. What are the weaknesses of the paper, especially regarding its generalization ability to real-world environments?
4. Do you have any concerns or suggestions regarding the limitations of the proposed method?
Summary Of The Paper
The authors propose the first learned state representation trained through NeRF supervision for reinforcement learning purposes. The proposed method shows that an offline-trained latent-conditioned NeRF yields a learned state representation that can make training an RL policy online more sample efficient. The final results on three simulated environments are impressive compared to other learned representations and a keypoints representation selected by human experts.

Strengths And Weaknesses
Pros:
- The first learned state representation through NeRF supervision for reinforcement learning purposes.
- The sample efficiency is greatly improved compared to other learned representations and a keypoints representation selected by human experts.

Cons:
- Only simplified simulation environments are used rather than real-world environments rendered by NeRF.

Questions
Any thoughts on how the proposed method can be generalized to real-world environments?

Limitations
I didn't find a clear section discussing the limitations.
NIPS
Title Reinforcement Learning with Neural Radiance Fields Abstract It is a long-standing problem to find effective representations for training reinforcement learning (RL) agents. This paper demonstrates that learning state representations with supervision from Neural Radiance Fields (NeRFs) can improve the performance of RL compared to other learned representations or even low-dimensional, hand-engineered state information. Specifically, we propose to train an encoder that maps multiple image observations to a latent space describing the objects in the scene. The decoder built from a latent-conditioned NeRF serves as the supervision signal to learn the latent space. An RL algorithm then operates on the learned latent space as its state representation. We call this NeRF-RL. Our experiments indicate that NeRF as supervision leads to a latent space better suited for the downstream RL tasks involving robotic object manipulations like hanging mugs on hooks, pushing objects, or opening doors. Video: https://dannydriess.github.io/nerf-rl 1 Introduction The sample efficiency of reinforcement learning (RL) algorithms crucially depends on the representation of the underlying system state they operate on [1, 2, 3, 4, 5, 6, 7]. Sometimes, a low-dimensional (direct) representation of the state, such as the positions of the objects in the environment, is considered to make the resulting RL problem most efficient [2]. However, such low-dimensional, direct state representations can have several disadvantages. On the one hand, a perception module, e.g., pose estimation, is necessary in the real world to obtain the representation from raw observations, which often is difficult to achieve in practice with sufficient robustness. On the other hand, if the goal is to learn policies that generalize over different object shapes [8], using a low-dimensional state representation is often impractical. Such scenarios, while challenging for RL, are common, e.g., in robotic manipulation tasks. Therefore, there is a large history of approaches that consider RL directly from raw, high-dimensional observations like images (e.g., [9, 10]). Typically, an encoder takes the high-dimensional input and maps it to a low-dimensional latent representation of the state. The RL algorithm (e.g., the Q-function or the policy network) then operates on the latent vector as state input. This way, no separate perception module is necessary, the framework can extract information from the raw observations that are relevant for the task, and the RL agent, in principle, may generalize over challenging environments, in which, e.g., object shapes are varied. While these are advantages in principle, jointly training encoders capable of processing high-dimensional inputs from the RL signal alone is challenging. To address this, one approach is to pretrain the encoder on a different task, e.g., image reconstruction [1, 4, 11], multi-view consistency [6], or a time-constrastive task [3]. Alternatively, an auxiliary loss on the latent encoding can be added during the RL procedure [5]. In both cases, the choice of the actual (auto-)encoder architecture and associated (auxiliary) loss function has a significant influence on the usefulness of the resulting latent space for the downstream ∗equal contribution. Correspondence: [email protected] 36th Conference on Neural Information Processing Systems (NeurIPS 2022). RL task. Especially for image data, convolutional neural networks (CNNs) are commonly used for the encoder [12]. 
However, 2D CNNs have a 2D (equivariance) bias, while for many RL tasks, the 3D structure of our world is essential. Architectures like Vision Transformers [13, 14] process images with no such direct 2D bias, but they often require large scale data, which might be challenging in RL applications. Additionally, although multiple uncalibrated 2D image inputs can be used with generic image encoders [15], they do not benefit from 3D inductive biases, which may help for example in resolving ambiguities in 2D images such as occlusions and object permanence. Recently, Neural Radiance Fields (NeRFs) [16] have shown great success in learning to represent scenes with a neural network that enables to render the scene from novel viewpoints, and have sparked broad interest in computer vision [17]. NeRFs exhibit a strong 3D inductive bias, leading to better scene reconstruction capabilities than methods composed of generic image encoders (e.g., [18]). In the present work, we investigate whether incorporating these 3D inductive biases of NeRFs into learning a state representation can benefit RL. Specifically, we propose to train an encoder that maps multiple RGB image views of the scene to a latent representation through an auto-encoder structure, where a (compositional) NeRF decoder provides the self-supervision signal using an image reconstruction loss for each view. In the experiments, we show for multiple environments that supervision from NeRF leads to a latent representation that makes the downstream RL procedure more sample efficient compared to supervision via a 2D CNN decoder, a contrastive loss on the latent space, or even hand-engineered, perfect low-level state information given as keypoints. Commonly, RL is trained on environments where the objects have the same shape. Our environments include hanging mugs on hooks, pushing objects on a table, and a door opening scenario. In all of these, the objects’ shapes are not fixed, and we require the agent to generalize over all shapes from a distribution. To summarize our main contributions: (i) we propose to train state representations for RL with NeRF supervision, and (ii) we empirically demonstrate that an encoder trained with a latent-conditioned NeRF decoder, especially with an object-compositional NeRF decoder, leads to increased RL performance relative to standard 2D CNN auto-encoders, contrastive learning, or expert keypoints. 2 Related Work Neural Scene/Object Representations in Computer Vision, and Applications. To our knowledge, the present work is the first to explore if neural scene representations like NeRFs can benefit RL. Outside of RL, however, there has been a very active research field in the area of neural scene representations, both in the representations themselves [19, 20, 21, 22] and their applications; see [23, 24, 17] for recent reviews. Within the family of NeRFs and related methods, major thrusts of research have included: improving modeling formulations [25, 26], modeling larger scenes [26, 27], addressing (re-)lighting [28, 29, 30], and an especially active area of research has been in improving speed, both of training and of inference-time rendering [31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41]. In our case, we are not constrained by inference-time computation issues, since we do not need to render images, and only have to run our latent-space encoder (with a runtime of approx. 7 ms on an RTX3090). 
Additionally of particular relevance, various methods have developed latent-conditioned [42, 43, 44] or compositional/object-oriented approaches for NeRFs [45, 46, 47, 48, 49, 50, 51, 52, 53], although they, nor other NeRF-style methods to our knowledge, have been applied to RL. Neural scene representations have found application across many fields (i.e., augmented reality and medical imaging [54]) and both NeRFs [55, 56, 57, 58] and other neural scene approaches [59, 60, 61, 62] have started to be used for various problems in robotics, including pose estimation [55], trajectory planning [56], visual foresight [11, 53], grasping [59, 57], and rearrangement tasks [60, 61, 58]. Learning State Representations for Reinforcement Learning. One of the key enabling factors for the success of deep RL is its ability to find effective representations of the environment from high-dimensional observation data [10, 63]. Extensive research has gone into investigating different ways to learn better state representations using various auxiliary objective functions. Contrastive learning is a common objective and has shown success in unsupervised representation learning in computer vision applications [64, 65]. Researchers built upon this success and have shown such learning objectives can lead to better performance and sample efficiency in deep RL [66, 67], where the contrasting signals could come from time alignment [68, 3], camera viewpoints [69], and different sensory modalities [70], with applications in real-world robotic tasks [6, 71]. Extensive efforts have investigated the role of representation learning in RL [72], provided a detailed analysis of the importance of different visual representation pretraining methods [73], and shown how we can improve training stability in the face of multiple auxiliary losses [74]. There is also a range of additional explorations on pretraining methods with novel objective functions (e.g., bisimulation metrics [75] and temporal cycle-consistency loss [76]) and less-explored data sources (e.g., in-thewild images [77] and action-free videos [78]). Please check the survey for more related work in this direction [79]. Our method is different in that we explicitly utilize a decoder that includes strong 3D inductive biases provided by NeRFs, which we empirically show improves RL for tasks that depend on the geometry of the objects. 3 Background 3.1 Reinforcement Learning This work considers decision problems that can be described as discrete-time Markov Decision Processes (MDPs) M = ⟨S,A, T, γ,R, P0⟩. S and A are the sets of all states and actions, respectively. The transition probability (density) from s to s′ using an action a is T (s′ | s, a). The agent receives a real-valued reward R(s, a, s′) after each step. The discount factor γ ∈ [0, 1) trades off immediate and future rewards. P0 : S → R+0 is the distribution of the start state. RL algorithms try to find the optimal policy π∗ : S × A → R+0 , where π∗ = argmaxπ ∑∞ t=0 γ tEst+1∼T (·|st,at), at∼π(·|st),s0∼P0 [R(st, at, st+1)] . Importantly, in this work, we consider RL problems where the state s encodes both the position and the shape of the objects in the scene. We require the RL agent to generalize over all of these shapes at test time. We can therefore think of the state as a tuple s = (sp, ss), where sp encodes positional information, and ss encodes the shapes involved. 
We focus the experiments on sparse reward settings, meaning R(s, a, s′) = R0 > 0 for s′ ∈ Sg and R(s, a, s′) = 0 for s ∈ S\Sg, where the volume of Sg ⊂ S is much smaller than the volume of S. The state space S usually is low-dimensional or a minimal description of the degrees of freedom of the system. In this work, we consider that the RL algorithm has only access to a (high-dimensional) observation y ∈ Y of the scene (e.g., RGB images). In particular, this means that the policy has observations as input a ∼ π(· | y). Since we assume that the underlying state s = (sp, ss) is fully observable from y, we can treat y like a state for an MDP. Reinforcement Learning with Learned Latent Scene Representations. The general idea of RL with learned latent scene representations is to learn an encoder Ω that maps an observation y ∈ Y to a k-dimensional latent vector z = Ω(y) ∈ Z ⊂ Rk of the scene. The actual RL components, e.g., the Q-function or policy, then operate on z as its state description. For a policy π, this means that the action a ∼ π(· | z) = π(· | Ω(y)) is conditional on the latent vector z instead of the observation y directly. The dimension k of the latent vector is typically (much) smaller than that of the observation space Y , but larger than that of the state space S. 3.2 Neural Radiance Fields (NeRFs) The general idea of NeRF, originally proposed by [16], is to learn a function f = (σ, c) that predicts the emitted RGB color value c(x) ∈ R3 and volume density σ(x) ∈ R≥0 at any 3D world coordinate x ∈ R3. Based on f , an image from an arbitrary view and camera parameters can be rendered by computing the color C(r) ∈ R3 of each pixel along its corresponding camera ray r(α) = r(0) + αd through the volumetric rendering relation C(r) = ∫ αf αn Tf (r, α)σ(r(α))c(r(α)) dα with Tf (r, α) = exp ( − ∫ α αn σ(r(u)) du ) . (1) Here, r(0) ∈ R3 is the camera origin, d ∈ R3 the pixel dependent direction of the ray and αn, αf ∈ R the near and far bounds within which objects are expected, respectively. The camera rays are determined from the camera matrix K (intrinsics and extrinsics) describing the desired view. 4 Learning State Representations for RL with NeRF Supervision This section describes our proposed framework, in which we use a latent state space for RL that is learned from NeRF supervision. For learning the latent space, we use an encoder-decoder where the Latent-conditioned Compositional NeRF Decoder decoder is a latent-conditioned NeRF, which may either be a global [42, 43, 44] or a compositional NeRF decoder [53]. To our knowledge, no prior work has used such NeRF-derived supervision for RL. In Sec. 4.1 we describe this proposition, Sec. 4.2 provides an overview of the encoder-decoder training, Sec. 4.3 and Sec. 4.4 introduce options for the NeRF decoder and encoder, respectively. 4.1 Using Latent-Conditioned NeRF for RL We propose the state representation z on which an RL algorithm operates to be a latent vector produced by an encoder that maps images from multiple views to a latent z, which is trained with a (compositional) latent-conditioned NeRF decoder. As will be verified in experiments, we hypothesize that this framework is beneficial for the downstream RL task, as it produces latent vectors that represent the actual 3D geometry of the objects in the scene, can handle multiple objects well, as well as fuse multiple views in a consistent way to deal with occlusions by providing shape completion, all of which is relevant to solve tasks where the geometry is important. 
There are two steps to our framework, as shown in Fig. 1. First, we train the encoder + decoder from a dataset collected by random interactions with the environment, i.e., we do not yet need a trained policy. Second, we take the encoder trained in the first step, which we leave frozen, and use the latent space to train an RL policy. Note that we investigate two variants of the auto-encoder framework, a global one, where the whole scene is represented by one single latent vector, and a compositional one, where objects are represented by their own latent vector. For the latter, objects are identified by masks in the views. 4.2 Overview: Auto-Encoder with Latent-Conditioned NeRF Decoder Assume that an observation y = ( I1:V ,K1:V ,M1:V ) of the scene consists of RGB images Ii ∈ R3×h×w, i = 1, . . . , V taken from V many camera views, their respective camera projection matrices Ki ∈ R3×4 (including both intrinsics and extrinsics), and per-view image masks M1:V . For a global NeRF decoder, these are global non-background masks M itot ∈ {0, 1}h×w, and for a compositional NeRF decoder as in [53], these are sets of binary masks M ij ∈ {0, 1} h×w that identify the objects j = 1, . . . ,m in the scene in view i. The global case is equivalent to m = 1, M ij=1 = M i tot. The encoder Ω maps these posed image observations from the multiple views into a set of latent vectors z1:m, where each zj represents each object in the scene separately in the compositional case, or the single z1 all objects in the scene. This is achieved by querying Ω on the masks M1:Vj , i.e., zj = Ω ( I1:V ,K1:V ,M1:Vj ) ∈ Rk (2) for object j. The supervision signal to train the encoder is the image reconstruction loss Li = ∥∥Ii ◦M itot −D (Ω (I1:V ,K1:V ,M1:V1:m) ,Ki)∥∥22 (3) on the input view i where the decoder D renders an image I = D(z1:m,K) for arbitrary views specified by the camera matrix K from the set of latent vectors z1:m. Both the encoder and decoder are trained end-to-end at the same time. The target images for the decoder are the same in both the global and compositional case: the global-masked image Ii ◦M itot (◦ is the element-wise product). In the compositional case this can be computed with M itot = ∨m j=1 M i j . By fusing the information from multiple views of the objects into the latent vector from which the decoder has to be able to render the scene from multiple views, this auto-encoder framework can learn latent vectors that represent the 3D configurations (shape and pose) of the objects in the scene. 4.3 Latent-Conditioned NeRF Decoder Details Global. The original NeRF formulation [16] learns a fully connected network f that represents one single scene (Sec. 3.2). In order to create a decoder from NeRFs within an auto-encoder to learn a latent space, we condition the NeRF f(·, z) on the latent vector z ∈ Rk [42, 43, 44]. While approaches such as [42, 43, 44] use the latent code to represent factors such as lighting or categorylevel generalization, in our case the latent code is intended to represent the scene variation, i.e., shape and configuration of objects, such that a downstream RL agent may use this as a state representation. Compositional. In the compositional case, the encoder produces a set of latent vectors z1:m describing each object j = 1, . . . ,m individually, this leads to m many NeRFs (σj(x), cj(x)) = fj(x) = f(x, zj), j = 1, . . . ,m with their associated volume density σj and color value cj . 
Note that while one could use different networks fj with their own network weights for each object, we have a single network f for all objects. This means that both the object’s pose as well as its shape and type are represented through the latent code zj . In order to force those conditioned NeRFs to learn the 3D configuration of each object separately, we compose them into a global NeRF model with the composition formulas (proposed e.g., by [80, 81]): σ(x) = ∑m j=1 σj(x), c(x) = 1σ(x) ∑m j=1 σj(x)cj(x). As this composition happens in 3D space, the latent vectors will be learned such that they correctly represent the actual shape and pose of the objects in the scene with respect to the other objects, which we hypothesize may be useful for the downstream RL agent. 4.4 Encoder Details The encoder Ω operates by fusing multiple views together to estimate the latent vector for the RL task. Since the scientific question of this work is to investigate whether a decoder built from NeRFs to train the encoder end-to-end is beneficial for RL, we consider two different encoder architectures. The first one is a 2D CNN that averages feature encodings from the different views, where each encoding is additionally conditioned on the camera matrix of that view. The second one is based on a learned 3D neural vector field that incorporates 3D biases by fusing the different camera views in 3D space through 3D convolutions and camera projection. This way, we are able to distinguish between the importance of 3D priors incorporated into the encoder versus the decoder. Per-image CNN Encoder (“Image encoder”). For the global version, we utilize the network architecture from [11] as an encoder choice. In order to work with multiple objects in the compositional case, we modify the architecture from [11] by taking the object masks into account as follows. For each object j, the 2D CNN encoder computes zj = ΩCNN ( I1:V ,K1:V ,M1:Vj ) = hMLP ( 1 V V∑ i=1 gMLP ( ECNN ( Ii ◦M ij ) ,Ki )) . (4) ECNN is a ResNet-18 [82] CNN feature extractor that determines a feature from the masked input image Ii ◦M ij of object j for each view i, which is then concatenated with the (flattened) camera matrix. The output of the network gMLP is hence the encoding of each view, including the camera information, which is averaged and then processed with hMLP, to produce the final latent vector. Note that in the global case, we set m = 1, M ij=1 = M i tot such that ΩCNN produces a single latent vector. Neural Field 3D CNN Encoder (“Field encoder”). Several authors [43] have considered to incorporate 3D biases into learning an encoder by computing pixel-aligned features from queried 3D locations of the scene to fuse the information from the different camera views directly in 3D space. We utilize the encoder architecture from [53], where the idea is to learn a neural vector field ϕ [ I1:V ,M1:Vj ] : R3 → RE over 3D space, conditioned on the input views and masks. The features of ϕ are computed from projecting the query point into the camera coordinate system from the respective view. To turn ϕ into a latent vector, it is queried on a workspace set Xh ∈ RdX×hX×wX (a 3D grid) and then processed by a 3D convolutional network, i.e., zj = E3D CNN ( ϕ [ I1:V ,M1:Vj ] (Xh) ) . This method differs from [43, 83, 60] by computing a latent vector from the pixel-aligned features. 
5 Baselines / Alternative State Representations In this section, we briefly describe alternative ways of training an encoder for RL, which we will investigate in the experiments as baselines and ablations. For details, refer to the appendix. Conv. Autoencoder. This baseline uses a standard CNN decoder based on deconvolutions instead of NeRF to reconstruct the image from the latent representation, similar to [1]. Therefore, with this baseline we investigate the influence of the NeRF decoder relative to CNN decoders. We follow the architecture of [11] for the deconvolution part for the global case. In the compositional case, we modify the architecture to be able to deal with a set of individual latent vectors instead of a single, global one. The image I = Ddeconv(gMLP( 1m ∑m j=1 zj),K) is rendered from z1:m by first averaging the latent vectors and then processing the averaged vector with a fully connected network gMLP, leading to an aggregated feature. This aggregated feature is concatenated with the (flattened) camera matrix K describing the desired view and then rendered into the image with Ddeconv. In the experiments, we utilize this decoder as the supervision signal to train the latent space produced by the 2D CNN encoder from Sec. 4.4. In the compositional version, the 2D CNN encoder (4) use the same object masks as the compositional NeRF-RL variant. Contrastive Learning. As an alternative to learning an encoder via a reconstruction loss, the idea of contrastive learning [84] is to define a loss function directly on the latent space that tries to pull latent vectors describing the same configurations together (called positive samples) while ones representing different system states apart (called negative samples). A popular approach to achieve this is with the InfoNCE loss [85, 64]. Let yi and ỹi be two different observations of the same state. Here, ·̃ denotes a perturbed/augmented version of the observation. For a mini-batch of observations {(yi, ỹi)}ni=1, after encoding those into their respective latent vectors zi = Ω(yi), z̃i = Ω(ỹi) with the encoder Ω, the loss for that batch would use (zi, z̃i) as a positive pair, and (zi,z̸̃=i) as a negative pair, or some similar variation. A crucial question in contrastive learning is how the observation y is perturbed/augmented into ỹ to generate positive and negative training pairs, described in the following. CURL. In CURL [5], the input image is randomly cropped to generate y and ỹ. We closely follow the hyperparameters and design of [5]. CURL operates on a single input view and we choose a view for this baseline from which the state of the environment can be inferred as best as possible (Fig. 17). Multi-View CURL. This baseline investigates if the neural field 3D encoder (Sec. 4.4) can be trained with a contrastive loss. As this encoder operates on multiple input views we double the number of available camera views. Half of the views are the same as in the other experiments, the other half are captured from sightly perturbed camera angles. We use the same loss as CURL, but with different contrastive pairs – rather than from augmentation, the contrastive style is taken from TCN [68]: the positive pairs come from different views but at the same moment in time, while negative pairs come from different times. Therefore, this baseline can be seen as a multi-view adaptation of CURL [5]. Direct State / Keypoint Representations. Finally, we also consider a direct, low-dimensional representation of the state. 
Since we are interested in generalizing over different object shapes, we consider multiple 3D keypoints that are attached at relevant locations of the objects by expert knowledge and observed with a perfect keypoint detector [8]. See Fig. 2b for a visualization of those keypoints. The keypoints both provide information about object shape and its pose. Furthermore, as seen in Fig. 2b, they have been chosen to reflect those locations in the environment relevant to solve the task. Additionally, we report results where the state is represented by the poses of the objects – as this cannot represent object shape, in this case we use a constant object shape for training and test. 6 Experiments We evaluate our proposed method on different environments where the geometry of the objects in the scene is important to solve the task successfully. Please also refer to the video https://dannydriess.github.io/nerf-rl. Commonly, RL is trained and evaluated on a single environment, where only the poses are changed, but the involved object shapes are kept constant. Since latent-conditioned NeRFs have been shown to be capable of generalizing over geometry [43], we consider experiments where we require the RL agent to generalize over object shapes within some distribution. Answering the scientific question of this work requires environments with multi-view observations — and for the compositional versions object masks as well. These are not provided in standard RL benchmarks, which is the reason for choosing the environments investigated in this work. We use PPO [86] as the RL algorithm and four camera views in all experiments. Refer to the appendix for more details about our environments, parameter choices, network architectures, and training times. 6.1 Environments Mug on Hook. In this environment, adopted from [87] and visualized in Fig. 2b, the task is to hang a mug on a hook. Both the mug and the hook shape are randomized. The actions are small 3D translations applied to the mug. This environment is challenging as we require the RL agent to generalize over mug and hook shapes and the tolerance between the handle opening and the hook is relatively small. Further, the agent receives a sparse reward only if the mug has been hung stably. This reward is calculated by virtually simulating a mug drop after each action. If the mug does not fall onto the ground from the current state, a reward of one is assigned, otherwise zero. Planar Pushing. The task in this environment, shown in Fig. 3b, is to push yellow box-shaped objects into the left region of the table and blue objects into the right region with the red pusher that can move in the plane, i.e., the action is two dimensional. This is the same environment as in [53] with the same four different camera views. Each run contains a single object on the table (plus the pusher). If the box has been pushed inside its respective region, a sparse reward of one is received, otherwise zero. The boxes in the environment have different sizes, two colors and are randomly initialized. In this environment, we cannot use keypoints for the multi-shape setting, as the reward depends on the object color; we evaluate the keypoints baseline only in the single shape case (Appendix). Door Opening. Fig. 4b shows the door environment, where the task is to open a sliding door with the red end-effector that can be translated in 3 DoFs as the action. To solve this task, the agent has to push on the door handle. 
6.2 Results Figs. 2a, 3a, and 4a show success rates (averaged over 6 independent experiment repetitions and over 30 test rollouts per repetition per timestep) as a function of training steps. Also shown are the 68% confidence intervals; a sketch of how such statistics can be aggregated follows at the end of this subsection. These success rates have been evaluated using randomized object shapes and initial conditions, and therefore reflect the agent’s ability to generalize over these. In all these experiments, a latent space trained with compositional NeRF supervision as the decoder consistently outperformed all other learned representations, both in terms of sample efficiency and asymptotic performance. Furthermore, our proposed framework with compositional NeRF even outperforms the expert keypoint representation. For the door environment, the 3D neural field encoder plus NeRF decoder (NeRF-RL comp. + field) reaches nearly perfect success rates. For the other two environments, the compositional 2D CNN encoder plus NeRF decoder (NeRF-RL comp. + image) was slightly better than the neural field encoder, but not significantly. This shows that the decoder built from compositional NeRF is relevant for the performance, not so much the choice of the encoder. Training the 3D neural field encoder with a contrastive loss as the supervision signal, with different camera views as positive/negative training pairs, is not able to achieve significant learning progress in these scenarios (Multi-View CURL). However, the other contrastive baseline, CURL, which has a different encoder and uses image cropping as data augmentation instead of additional camera views, is able to achieve decent performance and sample efficiency on the door environment, but not for the pushing environment. In the mug environment, CURL initially is able to make learning progress comparable to our framework, but never reaches a success rate above 59% and then becomes unstable. Similarly, the global CNN autoencoder baseline shows decent learning progress initially on the mug and pushing scenarios (not for the door), but then becomes unstable (mug) or never surpasses a 50% success rate (pushing). Such variations in performance or unstable learning across the different environments have not been observed with our method, which is stable in all cases. The compositional variant (NeRF-RL comp.) of our framework achieves the highest performance. Since the compositional conv. autoencoder baseline performs worse than its global variant, compositionality alone is not the reason for the better performance of our state representation. Indeed, the global NeRF-RL + image variant in the pushing env. is also better than all other baselines. In Appendix Sec. A.1, we find a positive correlation between NeRF reconstruction quality and RL performance. Furthermore, it turns out that the performance of our framework is not significantly affected when we pretrain the encoder with less data (Sec. A.2). In Sec. A.3, we investigate the influence of the number of input views on the RL performance. In the pushing scenario, only two or even one input view are sufficient for good performance. However, for tasks that require more 3D understanding such as the mug scenario, we observe a drop in performance when reducing the number of views from 4 to 2.
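As a worked example of the reported statistic, the following aggregates per-repetition success rates and a 68% confidence interval; treating the interval as plus/minus one standard error over repetitions is our assumption of a common convention, not necessarily the exact procedure used.

```python
import numpy as np

def success_rate_with_ci(successes):
    """successes: (n_repetitions, n_rollouts) array of 0/1 rollout outcomes.

    Returns the mean success rate and a 68% confidence interval, taken as
    +/- one standard error of the mean across repetitions.
    """
    per_rep = successes.mean(axis=1)                 # one rate per repetition
    mean = per_rep.mean()
    sem = per_rep.std(ddof=1) / np.sqrt(len(per_rep))
    return mean, (mean - sem, mean + sem)

# e.g., 6 repetitions x 30 test rollouts, as reported in the experiments
mean, ci = success_rate_with_ci(np.random.randint(0, 2, size=(6, 30)))
```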
7 Discussion Why NeRF provides better supervision. The NeRF training objective (1) strongly forces each f(·, z_j) to represent each object in its actual 3D configuration and relative to other objects in the scene (compositional case), including its shape. This implies that the latent vectors z_j have to contain this information, i.e., they are trained to determine the object type, shape and pose in the scene. In the global case, z_1 has to represent the geometry of the whole scene. As the tasks we consider require policies to take the geometry of the objects into account, we hypothesize that a latent vector that is capable of parameterizing a NeRF to reconstruct the scene in 3D space has to contain enough of the relevant 3D information about the objects for the policy to be successful as well. Masks. In order for the auto-encoder framework to be compositional, it requires object masks. We believe that instance segmentation has reached a level of maturity [88] such that this is a fair assumption to make. As we also utilize the individual masks for the compositional conv. autoencoder and the multi-view CURL baseline, which do not show good performance, this indicates that the masks are not the main reason that our state representation achieves higher performance. This is further supported by the fact that the global NeRF-RL variant, which does not rely on individual object masks, achieved a performance higher than all baselines on the pushing scenario; i.e., masks increase the performance of NeRF-RL as they enable the compositional version, but they do not seem essential. Offline/Online. In this work, we focused on pretraining the latent representation offline from a dataset collected by random actions. During RL, the encoder is fixed and only the policy networks are learned. This has the advantage that the same representation can be used for different RL tasks, and the dataset used to train the representation does not necessarily have to come from the same distribution. However, if a policy is needed to explore reasonable regions of the state space, collecting a dataset offline that covers the state space sufficiently to learn a suitable latent space might be challenging. This was not an issue for our experiments, where data collection with random actions was sufficient. Indeed, we show generalization over different starting states of the same environment and with respect to different shapes (within distribution). Future work could investigate NeRF supervision in an online setup. Note that the reconstruction loss via NeRF is computationally more demanding than via a 2D CNN deconv. decoder or a contrastive term, making NeRF supervision as an auxiliary loss at each RL training step costly. One potential solution for this is to apply the auxiliary loss not at every RL training step, but with a lower frequency; a minimal sketch of this idea follows at the end of this section. Regarding computational efficiency, this is where contrastive learning has an advantage over our proposed NeRF-based decoder, as the encoding with CURL can be trained within half a day, whereas the NeRF auto-encoder took up to 2 days to train for our environments. However, when using the encoder for RL, there is no difference in inference time. Multi-View. The auto-encoder framework we propose can fuse the information of multiple camera views into a latent vector describing an object in the scene. This way, occlusions can be addressed and the agent can gain a better 3D understanding of the scene from the different camera angles. Having access to multiple camera views and their camera matrices is an additional assumption we make, although we believe the capability to utilize this information is an advantage of our method.
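To illustrate the lower-frequency auxiliary loss mentioned in the Offline/Online paragraph, here is a minimal sketch for a hypothetical online variant; the loss callables and the period k are illustrative assumptions, not part of our experiments.

```python
def train_with_periodic_nerf_loss(policy_loss_fn, nerf_loss_fn, optimizer,
                                  batches, k=10):
    """Add the expensive NeRF reconstruction loss only every k-th RL step.

    policy_loss_fn / nerf_loss_fn: callables mapping a batch to a scalar
    torch loss (placeholders for the actual RL and NeRF objectives).
    """
    for step, batch in enumerate(batches):
        loss = policy_loss_fn(batch)
        if step % k == 0:                # lower-frequency NeRF supervision
            loss = loss + nerf_loss_fn(batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```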
8 Conclusion In this work, we have proposed the idea of utilizing Neural Radiance Fields (NeRFs) to train latent spaces for RL. Our environments focus on tasks where the geometry of the objects in the scene is relevant for solving them successfully. Training RL agents with the pretrained encoder that maps multiple views of the scene to a latent space consistently outperformed other ways of learning a state representation, and even keypoints chosen by expert knowledge. Our results show that the 3D prior present in compositional NeRF as the decoder is more important than priors in the encoder. Broader Impacts. Our main contribution is a method to learn representations that improve the efficiency of vision-based RL, which could impact automation. As such, our work inherits the general ethical risks of AI, like the question of how to address the potential of increased automation in society. Acknowledgments The authors thank Russ Tedrake for initial discussions; Jonathan Tompson and Jon Barron for feedback on drafts; Vincent Vanhoucke for encouraging latent NeRFs. This research has been supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC 2002/1 “Science of Intelligence” – project number 390523135. Danny Driess thanks the International Max-Planck Research School for Intelligent Systems (IMPRS-IS) for the support. Ingmar Schubert acknowledges support by the German Academic Scholarship Foundation. Yunzhu Li acknowledges support by Amazon.com Services LLC, PO# #2D-06310236 and the Wistron Corporation.
1. What is the focus and contribution of the paper on Neural Radiance Fields-based RL? 2. What are the strengths of the proposed approach, particularly in terms of its experiments and reproducibility? 3. What are the weaknesses of the paper regarding its limitations and generalization capabilities? 4. Do you have any concerns about the applicability of NeRF-RL to various types of RL tasks and environments? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a Neural Radiance Fields-based RL (NeRF-RL) method, which makes use of supervision from NeRF to learn state representations describing the objects in the 3D scene. Several experiments involving robotic object manipulation were conducted to demonstrate the superior performance of NeRF-RL over existing methods. Strengths And Weaknesses ###########Strengths########### The paper is well written and motivated. I believe that the experiments are roughly solid and reproducible. ###########Weaknesses########### The paper lacks adequate analysis and discussion of the limitations of NeRF-RL. The experimental scenes are simple and clear, and have only a small number of objects. I am concerned about whether NeRF-RL can generalize well to complicated 3D scenes, such as VizDoom and Minecraft. Questions Q1. Is NeRF-RL limited to 3D scenes? Can NeRF-RL be used to solve 2D RL tasks such as Atari games? I think it is necessary to discuss whether and how NeRF-RL applies to general RL tasks. Q2. NeRF-RL requires plenty of offline RL data to train the NeRF encoder before the policy training process. However, offline RL data are difficult to obtain for realistic 3D environments. Q3. As mentioned before, all the experimental scenes are simple and clear. Can NeRF-RL generalize well to complicated 3D scenes with a large number of objects? Q4. Does NeRF-RL require more offline RL data and larger neural networks to learn state representations than baseline methods? Limitations See the questions.
NIPS
Title Beta R-CNN: Looking into Pedestrian Detection from Another Perspective Abstract Recently significant progress has been made in pedestrian detection, but it remains challenging to achieve high performance in occluded and crowded scenes. It could be attributed mostly to the widely used representation of pedestrians, i.e., the 2D axis-aligned bounding box, which just describes the approximate location and size of the object. The bounding box models the object as a uniform distribution within the boundary, making pedestrians indistinguishable in occluded and crowded scenes due to much noise. To eliminate the problem, we propose a novel representation based on the 2D beta distribution, named Beta Representation. It pictures a pedestrian by explicitly constructing the relationship between the full-body and visible boxes, and emphasizes the center of visual mass by assigning different probability values to pixels. As a result, Beta Representation is much better for distinguishing highly-overlapped instances in crowded scenes with a new NMS strategy named BetaNMS. What’s more, to fully exploit Beta Representation, a novel pipeline Beta R-CNN equipped with BetaHead and BetaMask is proposed, leading to high detection performance in occluded and crowded scenes. Code will be released at github.com/Guardian44x/Beta-R-CNN. 1 Introduction Pedestrian detection is a critical research topic in the computer vision field with various real-world applications such as autonomous vehicles, intelligent video surveillance, robotics, and so on. During the last decade, with the rise of deep convolutional neural networks (CNNs), great progress has been achieved in pedestrian detection. However, it remains challenging to accurately distinguish pedestrians in occluded and crowded scenes. Although extensive methods have been attempted for occlusion and crowd issues, the performance is still limited by pedestrian representation, i.e., 2D bounding box representation. The axis-aligned minimum bounding box is widely utilized to explicitly define a distinct object, with its approximate location and size. Although box representation has advantages such as being parameterization- and annotation-friendly as the identity of an object, some nonnegligible drawbacks are limiting the performance of pedestrian detection, especially in occluded and crowded scenes. Firstly, the bounding box can be regarded as modeling the object as a uniform distribution in the box, but this actually goes against our intuitive perception: given an occluded pedestrian, what attracts our attention should be the visible part rather than the occluded noise. Secondly, based on box representation, intersection over union (IoU) serves as the metric to measure the difference between objects, which makes it difficult to distinguish highly-overlapped instances in crowded scenes. As shown in Fig. 2, even if the detectors succeed in identifying different human instances in a crowded scene, the highly-overlapped detections may still be suppressed by the post-processing of non-maximum suppression (NMS). [Fig. 2: comparison of BBox representation, 2-value mask, and Beta Representation on overlapping pedestrian pairs, annotated per pair with fIoU/vIoU/KL values such as 0.74/0.21/9.95 and 0.84/0.19/12.47; fIoU and vIoU denote the IoU computed on the full-body and visible boxes, respectively.]
Last, the full-body and visible boxes treat a distinct person as two separate parts, which omits their inner relationship as a whole and leads to difficulty in model optimization. To eliminate the weaknesses of box representation and preserve its advantages in the meanwhile, we propose a novel representation for pedestrians based on the 2D beta distribution, named Beta Representation. In probability theory, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1], as depicted in Fig. 1. By assigning different values to α, β, we can control the shape of the beta distribution, especially the peak and the full width at half maximum (FWHM), which is naturally suitable for representing pedestrians with unpredictable visible patterns. We take each pedestrian as a 2D beta distribution on the image and generate eight new parameters as the Beta Representation. As illustrated in Fig. 2, the boundary of the 2D beta distribution is consistent with the full-body box, while the peak along with the FWHM depends on the relation between the visible part and the full-body box. Compared with paired boxes, i.e., full-body and visible boxes, the 2D beta distribution treats each pedestrian more like an integrated whole and meanwhile emphasizes the object center of visual mass. Besides, instead of IoU, Kullback-Leibler (KL) divergence is adopted as a new metric to measure the distance between two objects, and the beta-distribution-based NMS strategy is named BetaNMS. Fig. 2 illustrates that while the bounding boxes are too close to distinguish (fIoU > 0.5, vIoU > 0.3), the 2D beta distributions still maintain high discrimination (KL > 7) between each other, thereby leading to better performance in distinguishing highly-overlapped instances. Moreover, to fully exploit Beta Representation in pedestrian detection, we design a novel pedestrian detector named Beta R-CNN, equipped with two key modules, i.e., BetaHead and BetaMask. BetaHead is utilized to regress the eight beta parameters and the class score, while BetaMask serves as an attention mechanism to modulate the extracted features with beta-distribution-based masks. Experiments on the extremely crowded benchmarks CrowdHuman [1] and CityPersons [2] show that our proposed approach can outperform the state-of-the-art results, which strongly validates the superiority of our method. 2 Related Work Pedestrian Detection. Pedestrian detection can be viewed as object detection for a specific category. With the development of deep learning, CNN-based detectors can be roughly divided into two categories: the two-stage approaches [3, 4] comprise separate proposal generation followed by a classification and regression module to refine the proposals, and the one-stage approaches [5–7] perform localization and classification simultaneously on the feature maps without a separate proposal generation module. Most existing pedestrian detection methods employ either the single-stage or two-stage strategy as their model architecture. Occlusion Handling. In pedestrian detection, occlusion leads to misclassifying pedestrians. A common strategy is the part-based approach [8–11], which ensembles a series of body-part detectors to localize partially occluded pedestrians.
Also, some methods train different models for the most frequent occlusion patterns [12, 13] or model different occlusion patterns in a joint framework [14, 15], but they are all designed for specific occlusion patterns and are not able to generalize well to other occluded scenes. Besides, attention mechanisms have been applied to handle different occlusion patterns [9, 16]. MGAN [16] introduces a novel mask-guided attention network, which emphasizes visible pedestrian regions while suppressing the occluded parts by modulating extracted features. Moreover, a few recent works [17, 18] have exploited annotations of the visible box as extra supervision to improve pedestrian detection performance. Crowdedness Handling. As for crowded scenes, besides the misclassification issue, crowdedness makes it difficult to distinguish highly-overlapped pedestrians. A few previous works propose new loss functions to address the problem of crowded detections. For example, OR-CNN [8] proposes an aggregation loss to enforce proposals to be close to the corresponding objects and to minimize the internal region distances of proposals associated with the same objects. RepLoss [19] proposes Repulsion Loss, which introduces an extra penalty for proposals intertwined with multiple ground truths. Moreover, some advanced NMS strategies [20–23, 18] have been proposed to alleviate the crowding issues to some extent, but they still take IoU as the metric to measure the difference between detected objects, which limits the performance in identifying highly-overlapped instances among crowded boxes. Object Representation. In computer vision, object representation is one primary topic, and there are many representations for objects in 2D images, such as 2D bounding boxes [4], polygons [24], splines [25], and pixels [26]. Each has strengths and weaknesses from a specific application’s practical perspective, differing in annotation cost, information density, and level of fidelity. Distribution-based representation has also been tried in [27], which utilizes the bivariate normal distribution as the representation of objects. However, when transformed from bounding boxes rather than segmentation, the mean and variance of the bivariate normal distribution are still consistent with the center and scale. Besides, its performance is considerably poorer than that of other methods. In this paper, Beta Representation provides a more detailed representation for occluded pedestrians, along with a new metric to substitute for IoU and a new detector Beta R-CNN, thereby alleviating the occlusion and crowding issues to a great extent. 3 Method In this section, we first introduce the parameterized Beta Representation for pedestrians. Then, to fully exploit the Beta Representation, a novel pipeline Beta R-CNN is proposed. Moreover, a specific NMS strategy based on the beta distribution and KL divergence, i.e., BetaNMS, is analyzed in detail. 3.1 Beta Representation 3.1.1 Beta Distribution In probability theory and mathematical statistics, the beta distribution is a family of one-dimensional continuous probability distributions defined on the interval [0, 1], parameterized by two positive shape parameters α and β.
For 0 ≤ x ≤ 1 and shape parameters α, β > 0, the probability density function (PDF) of the beta distribution is a power function of the variable x and its reflection (1 − x) as follows:

Be(x; α, β) = Γ(α+β) / (Γ(α)Γ(β)) · x^(α−1) (1−x)^(β−1) = (1 / B(α, β)) · x^(α−1) (1−x)^(β−1),   (1)

where Γ(z) is the gamma function and B(α, β) is a normalization factor to ensure the total probability is 1. Some beta distribution samples are shown in Fig. 1. According to the above definition, the mean µ, variance σ² and shape parameter ν can be formulated as follows:

µ = E(x) = α / (α+β),   σ² = E(x−µ)² = αβ / ((α+β)²(α+β+1)),   ν = α + β.   (2)

3.1.2 Beta Representation for Pedestrian As introduced in Sec. 3.1.1, the beta distribution has two key characteristics: 1) boundedness, the beta distribution is defined on the interval [0, 1]; 2) asymmetry, the peak and FWHM can be controlled by the parameters α and β. These two characteristics make the beta distribution suitable to describe the location, shape and visible pattern of occluded pedestrians. The parameterized Beta Representation is generated from the two annotated boxes, i.e., the full-body and visible boxes. Considering that the bounding box is a 2D representation and is always axis-aligned, we utilize two independent beta distributions on the x-axis and y-axis respectively. As mentioned before, we take the full-body box as the boundary of the 2D beta distribution, while the peak along with the FWHM depends on the relation between the visible part and the full-body box. However, the transition relation between the peak, the FWHM and the parameters α, β is hard to formulate. Instead, we calculate the mean and variance of the beta distribution with different weights assigned to the visible and non-visible parts, formulated as follows:

µx = (∫_{lf}^{rf} x f(x) dx) / (∫_{lf}^{rf} f(x) dx),   σx² = (∫_{lf}^{rf} (x − µx)² f(x) dx) / (∫_{lf}^{rf} f(x) dx),
µy = (∫_{tf}^{bf} y f(y) dy) / (∫_{tf}^{bf} f(y) dy),   σy² = (∫_{tf}^{bf} (y − µy)² f(y) dy) / (∫_{tf}^{bf} f(y) dy),   (3)

where [lf, tf, rf, bf] and [lv, tv, rv, bv] denote the full-body box and visible box respectively, and f is defined as the weight of each pixel based on visibility:

f(x) = Wv if lv ≤ x ≤ rv, and Wf otherwise;   f(y) = Wv if tv ≤ y ≤ bv, and Wf otherwise,   (4)

where Wf = 0.04, Wv = 1 in our experiments, and the size of the visible box can be approximated as wv = ρσx, hv = ρσy (ρ = √12). Finally, we can calculate the parameters α, β from the normalized mean and variance, where λ (set to ρ/4) is a constant to keep α, β > 1:

µ̄x = (µx − l) / (r − l),   µ̄y = (µy − t) / (b − t),   σ̄x = λσx / (r − l),   σ̄y = λσy / (b − t),
νx = αx + βx = µ̄x(1 − µ̄x) / σ̄x² − 1,   νy = αy + βy = µ̄y(1 − µ̄y) / σ̄y² − 1,
αx = µ̄x νx,   αy = µ̄y νy,   βx = (1 − µ̄x) νx,   βy = (1 − µ̄y) νy.   (5)

Generally speaking, for each pedestrian, the Beta Representation is parameterized by eight parameters, i.e., [l, t, r, b, αx, βx, αy, βy], where [l, t, r, b] are the boundaries indicating the location on the image, and [αx, βx, αy, βy] are the shape parameters of the 2D beta distribution describing the visibility of the pedestrian. The probability density function of the 2D beta distribution over the whole image is formulated as follows:

P(x, y) = C · Be(x̄; αx, βx) · Be(ȳ; αy, βy) for l ≤ x ≤ r, t ≤ y ≤ b, and 0 otherwise,   (6)

where x̄ = (x − l)/(r − l), ȳ = (y − t)/(b − t), and C is a normalization factor to keep the sum of the PDF at 1. For pixels inside the beta boundary, the probability values are consistent with the product of the two one-dimensional beta distributions; otherwise, the probability values are set to zero.
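As a concrete companion to Eqs. (3)–(5), the following is a minimal numeric sketch that converts a full-body/visible box pair into the eight beta parameters; the closed-form piecewise integrals and all names are our own derivation from the weighted-moment definitions above, not the authors' released code.

```python
import math

def beta_representation(full, visible, w_f=0.04, w_v=1.0,
                        lam=math.sqrt(12) / 4):
    """Convert a (full-body, visible) box pair into [l, t, r, b, ax, bx, ay, by].

    Boxes are (left, top, right, bottom). The integrals of Eq. (3) with the
    piecewise-constant weight of Eq. (4) are evaluated in closed form.
    """
    def axis_params(lo, hi, vlo, vhi):
        # segments [lo, vlo] and [vhi, hi] carry weight w_f, [vlo, vhi] weight w_v
        segs = [(lo, vlo, w_f), (vlo, vhi, w_v), (vhi, hi, w_f)]
        m0 = sum(w * (b - a) for a, b, w in segs)            # integral of f
        m1 = sum(w * (b**2 - a**2) / 2 for a, b, w in segs)  # integral of x*f
        m2 = sum(w * (b**3 - a**3) / 3 for a, b, w in segs)  # integral of x^2*f
        mu, var = m1 / m0, m2 / m0 - (m1 / m0) ** 2
        mu_n = (mu - lo) / (hi - lo)                         # normalized mean
        sig_n = lam * math.sqrt(var) / (hi - lo)             # normalized std
        nu = mu_n * (1 - mu_n) / sig_n**2 - 1                # alpha + beta, Eq. (5)
        return mu_n * nu, (1 - mu_n) * nu                    # alpha, beta

    (lf, tf, rf, bf), (lv, tv, rv, bv) = full, visible
    ax, bx = axis_params(lf, rf, lv, rv)
    ay, by = axis_params(tf, bf, tv, bv)
    return [lf, tf, rf, bf, ax, bx, ay, by]
```

As a sanity check, a fully visible box yields α = β = 1.5 on each axis, i.e., a symmetric distribution peaked at the box center with α, β > 1 as the text requires.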
[Fig. 3: Beta R-CNN architecture: a backbone with an RPN (BetaHead), followed by cascaded RoI stages, each with RoI pooling and a BetaHead predicting the class score and beta parameters, plus a BetaMask module.]

3.1.3 Advantages Our proposed Beta Representation shows several impressive advantages. Firstly, it is more precise in terms of the shape and visibility of pedestrians compared with box representation. While the bounding box models the object as a uniform distribution inside the box, the 2D beta distribution concentrates more on the center of visual mass. Secondly, compared with the paired boxes, i.e., the full-body box along with the visible box, the 2D beta distribution treats the pedestrian more like an integrated whole rather than two individual parts. Last, it can handle a few problematic situations such as identifying highly-occluded and highly-overlapped objects, which will be discussed in detail. Moreover, it is worth mentioning that pixel-wise annotations in segmentation can also be transformed into the parameterized Beta Representation based on the above equations. 3.2 Beta R-CNN To better implement the Beta Representation, we introduce a new detector named Beta R-CNN, inspired by Faster R-CNN [4] and Cascade R-CNN [28]. The architecture is shown in Fig. 3. BetaHead and BetaMask are the two core modules in Beta R-CNN. In the following sections, we discuss them respectively. 3.2.1 BetaHead Since we adopt the Beta Representation to describe a pedestrian, BetaHead is designed to regress the eight beta parameters, i.e., [l, t, r, b, αx, βx, αy, βy], which is analogous to the regression head in vanilla Faster R-CNN. Specifically, as α, β are too abstract to learn directly, we adopt the mean and variance as regression targets, i.e., [l, t, r, b, µx, µy, σx, σy]. The four boundary parameters, i.e., [l, t, r, b], utilize the same normalization strategy introduced in [4]. For the other four shape parameters, i.e., [µx, µy, σx, σy], we adopt the following normalization:

t_µx = (µx − xa)/wa,   t_µy = (µy − ya)/ha,   t_σx = log(σx/wa),   t_σy = log(σy/ha),
t*_µx = (µ*x − xa)/wa,   t*_µy = (µ*y − ya)/ha,   t*_σx = log(σ*x/wa),   t*_σy = log(σ*y/ha),   (7)

where x, y, w, h denote the center coordinates and size of the boundary; µx, σx, µy, σy denote the mean and variance of the object; plain and starred symbols stand for the predicted and ground-truth beta parameters respectively, while the subscript a denotes the anchor box. SmoothL1 loss is adopted to optimize the BetaHead.
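A small sketch of the target encoding in Eq. (7); the function and variable names are ours, and the anchor is given in center/size form as in the text. The same encoding is applied to predictions and ground truth before the SmoothL1 loss.

```python
import math

def encode_beta_targets(mu_x, mu_y, sig_x, sig_y, anchor):
    """Normalize the four shape parameters against an anchor box, Eq. (7).

    anchor: (x_a, y_a, w_a, h_a) center coordinates and size.
    """
    x_a, y_a, w_a, h_a = anchor
    return ((mu_x - x_a) / w_a,        # t_mu_x
            (mu_y - y_a) / h_a,        # t_mu_y
            math.log(sig_x / w_a),     # t_sigma_x
            math.log(sig_y / h_a))     # t_sigma_y
```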
3.2.2 BetaMask BetaMask is another novel module introduced in Beta R-CNN. Most pedestrian detectors treat all extracted features of a person as equally important, which results in poor performance in highly-occluded scenes due to the obvious noise. As we introduced in Sec. 3.1, the Beta Representation itself has different focuses when picturing a person, emphasizing the visible part in occluded scenes. It is thus intuitive to adopt an attention mechanism with the 2D beta distribution to highlight the features of visible parts and simultaneously suppress other noise, which could induce the network to pay more attention to the discriminative features and achieve better localization accuracy and higher confidence. Different from common attention mechanisms, our proposed BetaMask is based on the 2D beta distribution, which is more targeted. In this paper, we directly generate the mask based on the prediction results of the previous BetaHead instead of a CNN module like [16], as the beta mask is more like a parameterized probability distribution and it is difficult to keep the consistency of the distribution with convolutional kernels. Referring to equation (5), we get [αx, βx, αy, βy] from the predicted [l, t, r, b, µx, µy, σx, σy], and the mask values are sampled from the 2D beta distribution Be(x, y; αx, βx, αy, βy) = C · Be(x; αx, βx) · Be(y; αy, βy). Then we utilize the element-wise product to modulate the pooled features with the sampled beta masks. Finally, we use the KL divergence as the loss function to supervise the BetaMask module:

L_mask = Σ_{x,y} Be*(x, y) (log Be*(x, y) − log Be(x, y)),   (8)

where Be*(x, y) refers to the distribution generated from the ground truth, while Be(x, y) is generated from the predicted beta parameters. 3.3 BetaNMS When it comes to NMS, instead of taking IoU as the metric to measure the difference between detected objects, we follow [27] in utilizing KL divergence as an alternative, but based on the 2D beta distribution rather than the bivariate normal distribution of [27]. The KL divergence is defined as follows:

D_KL(p||q) = Σ_{x,y} p(x, y) (log p(x, y) − log q(x, y)),   (9)

where p and q refer to two parameterized distributions. In practice, to keep the symmetry of the distance metric, we adopt the symmetrized KL divergence D̄_KL(p||q):

D̄_KL(p||q) = (D_KL(p||q) + D_KL(q||p)) / 2.   (10)

Fig. 4 shows significant differences between the symmetrized KL divergence metric and the IoU metric on the CrowdHuman validation set. Each dot stands for a pair of two overlapping (fIoU > 0) pedestrians in the same scene; there are 206088 dots in each graph. When we adopt KL divergence and IoU to perform non-maximum suppression between the above paired boxes respectively, we find only 2844 failed cases based on KL divergence, while there are more than 10000 failed cases based on IoU, whether fIoU or vIoU. The comparison clearly demonstrates the superiority of our proposed Beta Representation and the BetaNMS strategy. More details will be shown in the experiments.
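A minimal sketch of the symmetrized KL of Eqs. (9)–(10) between two Beta Representations, evaluated on a discretized pixel grid; the grid resolution and the small epsilon guarding log(0) outside a box's support are our own implementation assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_pdf_grid(rep, xs, ys, eps=1e-12):
    """Discretize Eq. (6) on a pixel grid and normalize it to sum to 1."""
    l, t, r, b, ax, bx, ay, by = rep
    px = beta_dist.pdf((xs - l) / (r - l), ax, bx)  # zero outside [0, 1]
    py = beta_dist.pdf((ys - t) / (b - t), ay, by)
    p = np.outer(py, px)                            # separable 2D beta, (H, W)
    return p / p.sum() + eps                        # eps guards log(0)

def symmetrized_kl(rep1, rep2, width, height):
    """Symmetrized KL divergence of Eq. (10) between two Beta Representations."""
    xs, ys = np.arange(width) + 0.5, np.arange(height) + 0.5
    p = beta_pdf_grid(rep1, xs, ys)
    q = beta_pdf_grid(rep2, xs, ys)
    kl_pq = np.sum(p * (np.log(p) - np.log(q)))     # Eq. (9)
    kl_qp = np.sum(q * (np.log(q) - np.log(p)))
    return 0.5 * (kl_pq + kl_qp)
```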
The two-stage Cascade R-CNN [28] is taken as our baseline detection framework to perform a coarse-to-fine mechanism for more accurate beta prediction. As for anchor settings, we follow the same anchor scales as in [30], while the aspect ratios are set to H : W = {1 : 1, 2 : 1, 3 : 1}. For training, the batch size is 16, split over 8 GPUs. Each training round includes 16000 iterations on CityPersons and 40000 iterations on CrowdHuman. The learning rate is initialized to 0.02 and divided by 10 at one-half and three-quarters of the total iterations respectively. During training, the sampling ratio of positive to negative proposals for the RoI branch is 1 : 1 for CrowdHuman and 1 : 4 for CityPersons. On CityPersons, the input size for both training and testing is 1024 × 2048. On CrowdHuman, the short edge of each image is resized to 800 pixels for both training and testing. It is worth mentioning that the proposed components like BetaHead in Beta R-CNN are all optimization-friendly; thus there is no essential difference between Beta R-CNN and Faster R-CNN [4] or Cascade R-CNN [28] for model training and testing. 4.3 Ablation Study on CrowdHuman Ablation study and main results. Table 1 shows the ablation experiments on the components of Beta R-CNN proposed in Sec. 3, including BetaHead, BetaMask, Mask Loss, and BetaNMS. The baseline is a two-stage Cascade R-CNN with the default settings introduced in Sec. 4.2. As claimed in Sec. 3, it is clear that our method consistently improves the performance on all criteria. BetaHead and BetaMask are proposed to implement the Beta Representation and alleviate the occlusion issue with new regression targets and an attention mechanism, which reduce the MR−2 from 43.8% to 41.3% and improve the AP from 85.2% to 87.1%. The Mask Loss, i.e., equation (8), helps the model obtain a more accurate mask. Moreover, the improvement from BetaNMS demonstrates its superiority over IoU-based NMS. We further analyze the role of each module. Beta Representation can picture more details of the shape and visibility of pedestrians, especially in occluded and crowded scenes, and BetaMask adopts an attention mechanism based on the 2D beta distribution to modulate more discriminative features, which enhances Beta R-CNN further. Finally, BetaNMS eliminates the inherent drawback of IoU-based NMS when it meets highly-overlapped instances in crowded scenes. More details can be found in Sec. 3. Comparison with various NMS strategies. To illustrate the effectiveness of BetaNMS, we compare BetaNMS with IoU-based NMS on full-body/visible boxes (visible boxes are approximately transformed from the Beta Representation). Results are shown in Table 2, and all reported experiments here are based on Beta R-CNN. BetaNMS outperforms all other NMS methods by a large margin. Compared with fIoU-based NMS, vIoU-based NMS tends to recall more overlapped instances but brings in more false positives, reflected in the higher MR−2 and AP. In addition, even when we integrate fIoU and vIoU in NMS, BetaNMS still outperforms by at least 0.4% on MR−2 and 1.5% on AP, which means BetaNMS surely distinguishes highly-overlapped instances better than IoU-based NMS, whether the latter is based on the full-body box, the visible box, or both.
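To make the BetaNMS procedure concrete, here is a minimal greedy-suppression sketch built on the symmetrized_kl function sketched in Sec. 3.3 above; the suppression threshold value is an illustrative placeholder, not the paper's tuned setting.

```python
def beta_nms(reps, scores, width, height, kl_thresh=7.0):
    """Greedy NMS where a low symmetrized KL (similar distributions) suppresses.

    reps: list of Beta Representations; scores: detection confidences.
    kl_thresh is a placeholder; in BetaNMS, low KL plays the role that
    high IoU plays in standard NMS.
    """
    order = sorted(range(len(reps)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(symmetrized_kl(reps[i], reps[j], width, height) > kl_thresh
               for j in keep):
            keep.append(i)   # sufficiently distinct from all kept detections
    return keep
```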
Speed/accuracy trade-off. Each proposed module in Beta R-CNN is light-weight with little computation cost. We take the CrowdHuman validation set with 800×1400 input size to conduct speed experiments on 8 NVIDIA 2080Ti GPUs, and the average speeds are 0.483 s/image (Cascade R-CNN baseline) and 0.487 s/image (Beta R-CNN) respectively. The difference is negligible. 4.4 State-of-the-art (SOTA) Comparison on CrowdHuman Comparisons with some recent methods on the CrowdHuman validation set are shown in Table 3. They clearly show that our Beta R-CNN outperforms the others by a large margin, especially on the metric MR−2. Such a large gap demonstrates the superiority of our Beta R-CNN. It is worth noting that CrowdDet [32] achieves a slightly higher AP than ours, which is attributed to its motivation, i.e., laying emphasis on larger recall at the expense of more false positives, reflected in a higher MR−2 than ours. 4.5 Experiments on CityPersons To further verify the generalization ability of our method, we also conduct experiments on CityPersons. Table 4 compares Beta R-CNN with some state-of-the-art methods. For a fair comparison, we only list those methods that follow the standard settings, i.e., adopting the subset partition criterion in [19] and feeding images at the original size as inputs when performing evaluation. Because of the space limit, we will report the results with 1.3x enlarged input images in our supplementary materials. From the table, we can see that our Beta R-CNN outperforms all published methods on all four subsets, especially with a large margin on the Heavy subset, which verifies that our method is effective in occluded and crowded scenes. 5 Conclusion In this paper, we propose a statistical representation for occluded pedestrians based on 2D beta distributions, which takes the paired boxes as an integrated whole and emphasizes the object center of visual mass. Besides, Beta R-CNN, equipped with BetaHead and BetaMask, aims to improve pedestrian detection in occluded and crowded scenes. BetaNMS can effectively distinguish highly-overlapped instances based on the Beta Representation and KL divergence. The quantitative and qualitative experiments powerfully demonstrate the superiority of our methods. Beta Representation, as well as BetaHead, BetaMask, and BetaNMS, are all flexible enough to be integrated into other two-stage or single-shot detectors, and are also compatible with existing optimization methods to further boost their performance. Moreover, our method could be extended to more general scenes and other detection tasks. Acknowledgements This work was supported in part by the National Key Research and Development Program of China under Grant 2016QY02D0304 and the National Natural Science Foundation of China under Grant 60572002. Broader Impact Our contributions focus on a novel representation and pipeline for pedestrian detection, which can be extended to other computer vision tasks. It may also provide new ideas for follow-up research. It therefore has the potential to advance both the beneficial and harmful applications of object detectors, such as autonomous vehicles, intelligent video surveillance, robotics, and so on. As for ethical aspects and future societal consequences, this technology can bring harmful or beneficial effects to society, depending on the motivations of those who make use of this technological progress.
1. What is the focus of the paper regarding pedestrian detection? 2. What are the strengths of the proposed Beta Representation compared to traditional bounding box representation? 3. What are the weaknesses of the paper, particularly in terms of the title and experimental design? 4. Do you have any concerns or suggestions regarding the potential applications and future improvements of the Beta Representation?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper proposes a representation called “Beta Representation” as opposed to bounding-box-based representation for detecting pedestrians in a crowded scene. The new representation is based on 2D beta distributions with 8 parameters (two sets of alpha and beta parameters for the 2D beta distributions as well as four parameters to localize the pedestrian). The representation models a pedestrian with a pixel-level map, with the traditional visible and full-body bounding boxes implicitly represented in the parameters of the 2D beta distribution. The main advantage of the beta representation supposedly is its loosening of the constraint that purely full-body bounding boxes impose on the object representation, i.e. the assumption that the object is mostly visible. Hence, this new pixel-level representation could potentially alleviate the drawbacks of full-body bounding box representation in the presence of the extreme occlusion that is typical of crowded scenes. A specific implementation of this representation was performed using a new R-CNN model by introducing two components, i.e. a beta head (for regression of the 8 2D beta parameters) and a beta mask (to introduce 2D beta masks). Furthermore, the authors also replaced the common non-maximal suppression (NMS) with a beta version to avoid highly occluded pedestrians being suppressed in the regular NMS post-processing. For comparison of two beta representations, the authors use KL divergence as the metric as opposed to IoU, which is common in comparing two bounding boxes. The authors performed experiments on two standard datasets (CityPersons and CrowdHuman) that contain crowded pedestrian scenes to show the advantage of their representation over traditional bounding box detection. Strengths - 2D axis-aligned bounding-box-based representations have several drawbacks and implicit assumptions that might break in real-world scenarios. Bounding box representation assumes the relative orientation of the camera with respect to the pedestrian under consideration is unchanged, and mostly assumes the pedestrian can be axis-aligned. However, in real-world situations, factors such as networks of camera placements, pedestrian positions such as sitting and leaning in different poses, and occlusions would make it difficult to represent the pedestrian with axis-aligned bounding boxes. The proposed representation has the potential to fit tightly to the pedestrian and could potentially help under severe occlusions. - They clearly articulated the problem, the existing bounding box solutions, their drawbacks, and the advantages of the 2D beta representation. - The selection of performance metrics is sound, reflecting balanced performance and fair comparison with existing methods. - The ablation study seems sound, testing the progressive inclusion of each of the new/modified components. Weaknesses - The title is too broad, although the paper is proposing this new representation. It is essentially defining what “Beta Representation” is. However, the paper is proposing a specific implementation using a specific R-CNN model. Hence, I advise changing the title to reflect the scope of the specific implementation and experiments performed. This is especially useful as R-CNN is no longer a state-of-the-art detector, for a variety of reasons.
Hence, to distinguish this work from future works that could employ more advanced detection models built on top of this work, the title should be scoped a bit, despite the introduction of the representation in this work. Generally, 2D beta representation is not a completely new concept; it is used in hand trajectory representation, among other tasks. - In the larger dataset (CrowdHuman), the lack of a separate test set and the evaluation on the validation set is limiting. In the authors' defense, this could be due to the dataset's inherent split, and hence, for making comparisons with existing methods, it might make sense to use the existing split. However, the authors could also attempt to perform their own split with complete training, validation, and test sets and reevaluate the existing methods using this split for fair comparisons.
NIPS
Title Beta R-CNN: Looking into Pedestrian Detection from Another Perspective Abstract Recently significant progress has been made in pedestrian detection, but it remains challenging to achieve high performance in occluded and crowded scenes. It could be attributed mostly to the widely used representation of pedestrians, i.e., 2D axis-aligned bounding box, which just describes the approximate location and size of the object. Bounding box models the object as a uniform distribution within the boundary, making pedestrians indistinguishable in occluded and crowded scenes due to much noise. To eliminate the problem, we propose a novel representation based on 2D beta distribution, named Beta Representation. It pictures a pedestrian by explicitly constructing the relationship between full-body and visible boxes, and emphasizes the center of visual mass by assigning different probability values to pixels. As a result, Beta Representation is much better for distinguishing highly-overlapped instances in crowded scenes with a new NMS strategy named BetaNMS. What’s more, to fully exploit Beta Representation, a novel pipeline Beta R-CNN equipped with BetaHead and BetaMask is proposed, leading to high detection performance in occluded and crowded scenes. Code will be released at github.com/Guardian44x/Beta-R-CNN. 1 Introduction Pedestrian detection is a critical research topic in computer vision field with various real-world applications such as autonomous vehicles, intelligent video surveillance, robotics, and so on. During the last decade, with the rise of deep convolutional neural networks (CNNs), great progress has been achieved in pedestrian detection. However, it remains challenging to accurately distinguish pedestrians in occluded and crowded scenes. Although extensive methods have been attempted for occlusion and crowd issues, the performance is still limited by pedestrian representation, i.e., 2D bounding box representation. The axis-aligned minimum bounding box is widely utilized to explicitly define a distinct object, with its approximate location and size. Although box representation has advantages such as parameterization- and annotation-friendly as the identity of an object, some nonnegligible drawbacks are limiting the performance of pedestrian detection especially in occluded and crowded scenes. Firstly, the bounding box can be regarded as modeling the object as a uniform distribution in the box, but it actually goes against our intuitive perception. Given an occluded pedestrian, what attracts our attention should be the visible part rather than the occluded noise. Secondly, based on box representation, intersection ∗These authors contributed equally 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. BBox representation 2-value Mask Beta Representation fIoU:0.74, vIoU:0.21, KL:9.95 fIoU:0.68, vIoU:0.31, KL:10.34 fIoU:0.61, vIoU:0.45, KL:8.28 fIoU:0.84, vIoU:0.19, KL:12.47 Full Box Visible Box over union (IoU) serves as the metric to measure the difference between objects, which results in difficulty to distinguish highly-overlapped instances in crowded scenes. As shown in Fig. 2, even if the detectors succeed to identify different human instances in a crowded scene, the highly-overlapped detections may also be suppressed by the post-processing of non-maximum suppression (NMS). 
Last, the full-body and visible boxes treat a distinct person as two separate parts, which omit their inner relationship as a whole and lead to difficulty for model optimization. To eliminate the weaknesses of box representation and preserve its advantages in the meanwhile, we propose a novel representation for pedestrians based on 2D beta distribution, named Beta Representation. In probability theory, the beta distribution is a family of continuous probability distribution defined in the interval [0, 1], as depicted in Fig. 1. By assigning different values to α, β, we could control the shape of the beta distribution, especially the peak and the full width at half maximum (FWHM), which is naturally suitable for pedestrian representation with unpredictable visible patterns. We take each pedestrian as a 2D beta distribution on the image and generate eight new parameters as the Beta Representation. As illustrated in Fig. 2, the boundary of 2D beta distribution is consistent with the full-body box, while the peak along with FWHM depends on the relation between the visible part and full-body box. Compared with paired boxes, i.e., full-body and visible boxes, 2D beta distribution treats each pedestrian more like an integrated whole and emphasizes the object center of visual mass meanwhile. Besides, instead of IoU, Kullback-Leibler (KL) divergence is adopted as a new metric to measure the distance of two objects and the beta-distribution-based NMS strategy is named BetaNMS. Fig. 2 illustrates that while the bounding boxes are too close to distinguish (fIoU > 0.5, vIoU > 0.32), the 2D beta distributions still maintain high discrimination (KL > 7) between each other, thereby leading to better performance in distinguishing highly-overlapped instances. Moreover, to fully exploit Beta Representation in pedestrian detection, we design a novel pedestrian detector named Beta R-CNN, equipped with two different key modules, i.e., BetaHead and BetaMask. BetaHead is utilized to regress the eight beta parameters and the class score, while BetaMask serves as an attention mechanism to modulate the extracted feature with beta-distribution-based masks. Experiments on the extremely crowded benchmark CrowdHuman [1] and CityPersons [2] show that our proposed approach can outperform the state-of-the-art results, which strongly validate the superiority of our method. 2 Related Work Pedestrian Detection. Pedestrian detection can be viewed as object detection for the specific category. With the development of deep learning, CNN-based detectors can be roughly divided into two categories: the two-stage approaches [3, 4] comprise separate proposal generation followed by classification and regression module to refine the proposals; and the one-stage approaches [5–7] perform localization and classification simultaneously on the feature maps without the separate 2FIoU and vIoU are the IoU calculated based on full-body/visible boxes respectively. proposal generation module. Most existing pedestrian detection methods employ either the singlestage or two-stage strategy as their model architectures. Occlusion Handling. In pedestrian detection, occlusion leads to misclassifying pedestrians. A common strategy is the part-based approaches [8–11], which ensemble a series of body-part detectors to localize partially occluded pedestrians. 
Also some methods train different models for most frequent occlusion patterns [12, 13] or model different occlusion patterns in a joint framework [14, 15], but they are all just designed for some specific occlusion patterns and not able to generalize well in other occluded scenes. Besides, attention mechanism has been applied to handle different occlusion patterns [9, 16]. MGAN [16] introduces a novel mask guided attention network, which emphasizes visible pedestrian regions while suppressing the occluded parts by modulating extracted features. Moreover, a few recent works [17, 18] have exploited to utilize annotations of the visible box as extra supervisions to improve pedestrian detection performance. Crowdness Handling. As for crowded scenes, except for the misclassifying issues, crowdedness makes it difficult to distinguish highly-overlapped pedestrians. A few previous works propose new loss functions to address the problem of crowded detections. For example, OR-CNN [8] proposes aggregation loss to enforce proposals to be close to the corresponding objects and minimize the internal region distances of proposals associated with the same objects. RepLoss [19] proposes Repulsion Loss, which introduces extra penalty to proposals intertwined with multiple ground truths. Moreover, some advanced NMS strategies [20–23, 18] are proposed to alleviate the crowded issues to some extent, but they still take IoU as the metric to measure the difference between detected objects, which limits the performance on identifying highly-overlapped instances from crowded boxes. Object Representation. In computer vision, object representation is one primary topic, and there are many representations for objects in 2D images, such as 2D bounding boxes [4], polygons [24], splines [25], and pixels [26]. Each has strengths and weaknesses from a specific application’s practical perspective, providing annotation cost, information density, and variable levels of fidelity. Distribution-based representation has also been tried in [27] which utilizes the bivariate normal distribution as the representation of objects. However, when transformed from bounding boxes rather than segmentation, the mean and variance of bivarite normal distribution are still consistent with the center and scale. Besides, its performance is considerably poor compared to other methods. In this paper, Beta Representation provides a more detailed representation for occluded pedestrians, along with a new metric to substitute for IoU and a new detector Beta R-CNN, thereby alleviating the occlusion and crowd issues to a great extent. 3 Method In this section, we first introduce the parameterized Beta Representation for pedestrians. Then to fully exploit the Beta Representation, a novel pipeline Beta R-CNN is proposed. Moreover, a specific NMS strategy based on beta distribution and KL divergence, i.e., BetaNMS, is analyzed in detail. 3.1 Beta Representation 3.1.1 Beta Distribution In probability theory and mathematical statistics, the beta distribution is a family of one-dimensional continuous probability distribution defined in the interval [0, 1], parameterized by two positive shape parameters α and β. 
For 0 ≤ x ≤ 1 and shape parameters α, β > 0, the probability density function (PDF) of beta distribution is a exponential function of the variable x and its reflection (1 − x) as follows: Be(x;α, β) = Γ(α+ β) Γ(α)Γ(β) · x(α−1)(1− x)(β−1) = 1 B(α, β) · x(α−1)(1− x)(β−1), (1) where Γ(z) is the gamma function andB(α, β) is a normalization factor to ensure the total probability is 1. Some beta distribution samples are shown in Fig. 1. According to the above definition, the mean µ, variance σ2 and shape parameter ν can be formulated as follows: µ = E(x) = α α+ β , σ2 = E(x− µ)2 = αβ (α+ β)2(α+ β + 1) , ν = α+ β. (2) 3.1.2 Beta Representation for Pedestrian As introduced in Sec. 3.1.1, the beta distribution has two key characteristics: 1) Boundedness, the beta distribution is defined in the interval [0, 1]; 2) Asymmetry, the peak and FWHM can be controlled by parameters α and β. These two characteristics make beta distribution suitable to describe the location, shape and visible pattern of occluded pedestrians. Parameterized Beta Representation is generated from the two annotated boxes, i.e., full-body and visible boxes. Considering bounding box is a 2D representation and it is always axis-aligned, we utilize two independent beta distributions on the x-axis and y-axis respectively. As mentioned before, we take the full-body box as the boundary of 2D beta distribution, while the peak along with FWHM depends on the relation between the visible part and full-body box. However, the transition relation between the peak, FWHM and the parameters α, β is hard to formulate. Instead, we calculate the mean and variance of the beta distribution with different weights assigned to the visible part and non-visible part, formulated as follows: µx = ∫ rf lf xf(x)dx∫ rf lf f(x)dx , σx 2 = ∫ rf lf (x− µx)2f(x)dx∫ rf lf f(x)dx , µy = ∫ bf tf yf(y)dy∫ bf tf f(y)dy , σy 2 = ∫ bf tf (y − µy)2f(y)dy∫ bf tf f(y)dy , (3) where [lf , tf , rf , bf ], [lv, tv, rv, bv] denote the full-body box and visible box respectively, and f(x) is defined as the weight of each pixel based on the visibility: f(x) = { Wv, lv ≤ x ≤ rv Wf , others , f(y) = { Wv, tv ≤ y ≤ bv Wf , others , (4) where Wf = 0.04,Wv = 1 in our experiments and the size of visible box can be approximated as wv = ρσx, hv = ρσy (ρ = √ 12). Finally, we can calculate the parameters α, β according to the normalized mean and variance, while λ (set to ρ/4) is a constant to keep α, β > 1: µx = µx − l r − l , µy = µy − t b− t , σx = λ · σx r − l , σy = λ · σy b− t , νx = αx + βx = µx(1 + µx) σ2x − 1, νy = αy + βy = µy(1 + µy) σ2y − 1, αx = µxνx = µx( µx(1 + µx) σ2x − 1), αy = µyνy = µy( µy(1 + µy) σ2y − 1), βx = (1− µx)νx, βy = (1− µy)νy. (5) Generally speaking, for each pedestrian, Beta Representation is parameterized by eight parameters, i.e., [l, t, r, b, αx, βx, αy, βy], where[l, t, r, b] are the boundaries indicating the location on the image, and [αx, βx, αy, βy] are the shape parameters of the 2D beta distribution describing the visibility of pedestrians. The probability density function of the 2D beta distribution over the whole image is formulated as follows: P (x, y) = { C ·Be(x̄; αx, βx) ·Be(ȳ; αy, βy), l ≤ x ≤ r, t ≤ y ≤ b, 0, others, (6) where x̄ = (x− l)/(r − l), ȳ = (y − t)/(b− t), and C is a normalization factor to keep the sum of PDF to 1. For pixels inside the beta boundary, the probability values are consistent with the product of two one-dimensional beta distribution, otherwise the probability values are set to zeros. 
Backbone RPN (Beta Head) Class Box RoI Pooling Beta Head Class Beta Beta Mask RoI Pooling Beta Head Class Beta 3.1.3 Advantages Our proposed Beta Representation shows several impressive advantages. Firstly, it is more precise in terms of the shape and visibility of pedestrians compared with box representation. While the bounding box models the object as a uniform distribution inside the box, 2D beta distribution concentrates more on the center of visual mass. Secondly, compared with the paired boxes, i.e., full-body box along with visible box, 2D beta distribution treats the pedestrian more like an integrated whole rather than two individual parts. Last, it can handle a few problematic situations such as identifying highly-occluded and highly-overlapped objects, which will be discussed in detail. Moreover, it is worth mentioning that pixel-wise annotations in segmentation can also be transformed to the parameterized Beta Representation based on the above equations. 3.2 Beta R-CNN To better implement the Beta Representation, we introduce a new detector named Beta R-CNN inspired by Faster R-CNN [4] and Cascade R-CNN [28]. The architecture is shown in Fig. 3. BetaHead and BetaMask are two core modules in Beta R-CNN. In the following section, we will discuss them respectively. 3.2.1 BetaHead Since we adopt Beta Representation to describe a pedestrian, BetaHead is designed to regress the eight beta parameters, i.e., [l, t, r, b, αx, βx, αy, βy], which is analogous to the regression head in vanilla Faster R-CNN. Specifically, as α, β are too abstractive to learn, we adopt the mean and variance as regression targets, i.e., [l, t, r, b, µx, µy, σx, σy]. The four boundary parameters, i.e., [l, t, r, b], utilize the same normalization strategy introduced in [4]. And for the other four shape parameters, i.e., [µx, µy, σx, σy], we adopt the normalization as follows: tµx = (µx − xa)/wa, tµy = (µy − ya)/ha, tσx = log(σx/wa), tσy = log(σy/ha), t∗µx = (µ ∗ x − xa)/wa, t∗µy = (µ ∗ y − ya)/ha, t∗σx = log(σ ∗ x/wa), t ∗ σy = log(σ ∗ y/ha), (7) where x, y, w, h denote the center coordinates and size of the boundary; µx, σx, µy, σy denote the mean and variance of the object; µ and µ∗ stand for the predicted and ground-truth beta respectively, while subscript a denotes the anchor box. SmoothL1 loss is adopted to optimize the BetaHead. 3.2.2 BetaMask BetaMask is another novel module introduced in Beta R-CNN. Most pedestrian detectors treat the whole extracted features of a person equally important, which will result in poor performance for high-occluded scenes due to the obvious noise. As we introduced in Sec. 3.1, Beta Representation itself has different focuses to picture a person, which emphasizes the visible part in occluded scenes. It is very intuitive to adopt attention mechanism with 2D beta distribution to highlight the features of visible parts and suppress other noise simultaneously, which could induce the network to pay more attention to the discriminative features and achieve better localization accuracy and higher confidence. Different from the common attention mechanism, our proposed BetaMask is based on 2D beta distribution, which is more targeted. In this paper, we directly generate the mask based on prediction results of the previous BetaHead instead of a CNN module like [16], as the beta mask is more like a parameterized probability distribution and it is difficult to keep the consistency of the distribution with convolutional kernels. 
3.2.2 BetaMask

BetaMask is another novel module introduced in Beta R-CNN. Most pedestrian detectors treat all extracted features of a person as equally important, which results in poor performance in highly occluded scenes due to the noise from occluders. As introduced in Sec. 3.1, the Beta Representation weights different parts of a person differently and emphasizes the visible part in occluded scenes. It is therefore intuitive to adopt an attention mechanism based on the 2D beta distribution to highlight the features of visible parts and simultaneously suppress the noise, which induces the network to pay more attention to discriminative features and achieve better localization accuracy and higher confidence. Different from common attention mechanisms, our proposed BetaMask is built on the 2D beta distribution, which is more targeted. In this paper, we generate the mask directly from the prediction results of the preceding BetaHead instead of a CNN module as in [16], since the beta mask is a parameterized probability distribution and it is difficult to keep the distribution consistent with convolutional kernels.

Referring to Eq. (5), we obtain [α_x, β_x, α_y, β_y] from the predicted [l, t, r, b, µ_x, µ_y, σ_x, σ_y], and the mask values are sampled from the 2D beta distribution Be(x, y; α_x, β_x, α_y, β_y) = C · Be(x̄; α_x, β_x) · Be(ȳ; α_y, β_y). We then use the element-wise product to modulate the pooled feature with the sampled beta masks. Finally, we use the KL divergence as the loss function to supervise the BetaMask module:

L_{mask} = \sum_{x,y} Be^*(x, y) \left( \log Be^*(x, y) - \log Be(x, y) \right),   (8)

where Be^*(x, y) refers to the distribution generated from the ground truth, while Be(x, y) is generated from the predicted beta parameters.

3.3 BetaNMS

For NMS, instead of taking IoU as the metric to measure the difference between detected objects, we follow [27] in using KL divergence as an alternative, but based on the 2D beta distribution rather than the bivariate normal distribution in [27]. The KL divergence is defined as:

D_{KL}(p \| q) = \sum_{x,y} p(x, y) \left( \log p(x, y) - \log q(x, y) \right),   (9)

where p and q refer to two parameterized distributions. In practice, to keep the distance metric symmetric, we adopt the symmetrized KL divergence \bar{D}_{KL}(p \| q):

\bar{D}_{KL}(p \| q) = \left( D_{KL}(p \| q) + D_{KL}(q \| p) \right) / 2.   (10)

Fig. 4 shows significant differences between the symmetrized KL divergence metric and the IoU metric on the CrowdHuman validation set. Each dot stands for a pair of two overlapped (fIoU > 0) pedestrians in the same scene; there are 206,088 dots in each graph. When we use KL divergence and IoU respectively to perform non-maximum suppression between the above paired boxes, we find only 2,844 failed cases based on KL divergence, while there are more than 10,000 failed cases based on IoU, whether fIoU or vIoU. These comparisons clearly demonstrate the superiority of our proposed Beta Representation and the BetaNMS strategy. More details are given in the experiments.
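To sketch how BetaNMS can be realized, the snippet below discretizes each detection's 2D beta PDF (Eq. 6) on a shared grid, computes the symmetrized KL divergence of Eqs. (9)-(10) between pairs, and applies greedy suppression. The grid resolution and the KL threshold are placeholder values of ours, not settings reported above.

```python
import numpy as np
from scipy.stats import beta as beta_dist

GRID = 64  # grid resolution per axis (our choice, not a value from the paper)

def beta_pdf_2d(det, xs, ys):
    """Discretize the 2D beta PDF of Eq. (6) for one detection on a shared grid."""
    l, t, r, b, ax, bx, ay, by = det
    u = np.clip((xs - l) / (r - l), 0.0, 1.0)
    v = np.clip((ys - t) / (b - t), 0.0, 1.0)
    px = np.where((xs >= l) & (xs <= r), beta_dist.pdf(u, ax, bx), 0.0)
    py = np.where((ys >= t) & (ys <= b), beta_dist.pdf(v, ay, by), 0.0)
    p = np.outer(py, px) + 1e-12            # small floor avoids log(0)
    return p / p.sum()                       # normalize: grid values sum to 1

def sym_kl(p, q):
    """Symmetrized KL divergence of Eqs. (9)-(10) on the discretized grid."""
    kl_pq = np.sum(p * (np.log(p) - np.log(q)))
    kl_qp = np.sum(q * (np.log(q) - np.log(p)))
    return 0.5 * (kl_pq + kl_qp)

def beta_nms(dets, scores, kl_thresh=7.0):
    """Greedy suppression: keep a detection only if it is far (in sym-KL) from
    every already-kept one. kl_thresh is a placeholder, not a reported setting."""
    xs = np.linspace(min(d[0] for d in dets), max(d[2] for d in dets), GRID)
    ys = np.linspace(min(d[1] for d in dets), max(d[3] for d in dets), GRID)
    pdfs = [beta_pdf_2d(d, xs, ys) for d in dets]
    keep = []
    for i in np.argsort(np.asarray(scores))[::-1]:
        if all(sym_kl(pdfs[i], pdfs[j]) > kl_thresh for j in keep):
            keep.append(i)
    return keep
```

Note that the suppression test is inverted relative to IoU-based NMS: a small KL divergence means two distributions are similar, so the lower-scored one is suppressed, whereas with IoU it is a large overlap that triggers suppression.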
4 Experiment

4.1 Datasets

CityPersons Dataset. The CityPersons dataset [2] is a subset of Cityscapes that only contains person annotations. There are 2,975 images for training, and 500 and 1,575 images for validation and testing respectively. The average number of pedestrians in an image is 7. We evaluate our proposed method under the full-body setting, following the evaluation protocol in [2]; the partition of the validation set follows the standard setting in [19] according to visibility: Heavy [0, 0.65], Partial [0.65, 0.9], Bare [0.9, 1], Reasonable [0.65, 1].

CrowdHuman Dataset. The CrowdHuman dataset [1] has recently been released to specifically target the crowd issue in human detection. There are 15,000, 4,370, and 5,000 images in the training, validation, and testing sets respectively. The average number of persons in an image is 22.6, which is much more crowded than other pedestrian datasets. All experiments are trained on the CrowdHuman training set and evaluated on the validation set.

Evaluation Metric. AP (Average Precision) is the most popular metric for detection; it reflects both the precision and recall of the detection results, and larger AP indicates better performance. MR−2, short for log-average Miss Rate on False Positives Per Image (FPPI) [29], is commonly used in pedestrian detection; smaller MR−2 indicates better performance. MR−2 emphasizes false positives and false negatives more than AP does, and both are critical in pedestrian detection.

4.2 Implementation Details

In this paper, we adopt a Feature Pyramid Network (FPN) [30] with ResNet-50 [31] as the backbone for all experiments. The two-stage Cascade R-CNN [28] is taken as our baseline detection framework to perform a coarse-to-fine refinement for more accurate beta prediction. For the anchor settings, we follow the same anchor scales as [30], while the aspect ratios are set to H : W = {1 : 1, 2 : 1, 3 : 1}. For training, the batch size is 16, split across 8 GPUs. Each training round comprises 16,000 iterations on CityPersons and 40,000 iterations on CrowdHuman. The learning rate is initialized to 0.02 and divided by 10 at one-half and three-quarters of the total iterations respectively. During training, the sampling ratio of positive to negative proposals for the RoI branch is 1 : 1 for CrowdHuman and 1 : 4 for CityPersons. On CityPersons, the input size for both training and testing is 1024 × 2048. On CrowdHuman, the short edge of each image is resized to 800 pixels for both training and testing. It is worth mentioning that the proposed components such as BetaHead in Beta R-CNN are all optimization-friendly, so there is no essential difference between Beta R-CNN and Faster R-CNN [4] or Cascade R-CNN [28] in terms of model training and testing.
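For quick reference, the training schedule above can be collected into a small configuration sketch; this is our own structuring of the quoted hyperparameters, and the key names are hypothetical rather than taken from any released configuration file.

```python
# Our own structuring of the hyperparameters quoted above; the key names are
# hypothetical and not taken from any released configuration file.
TRAIN_CFG = {
    "backbone": "ResNet-50 + FPN",
    "anchor_aspect_ratios_hw": [(1, 1), (2, 1), (3, 1)],
    "batch_size": 16,                                   # split across 8 GPUs
    "base_lr": 0.02,                                    # /10 at 1/2 and 3/4 of total iters
    "total_iters": {"CityPersons": 16000, "CrowdHuman": 40000},
    "roi_pos_neg_ratio": {"CityPersons": (1, 4), "CrowdHuman": (1, 1)},
    "input_size": {"CityPersons": (1024, 2048), "CrowdHuman": "short edge 800"},
}
```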
4.3 Ablation Study on CrowdHuman

Ablation study and main results. Table 1 shows the ablation experiments for the components of Beta R-CNN proposed in Sec. 3, including BetaHead, BetaMask, Mask Loss, and BetaNMS. The baseline is a two-stage Cascade R-CNN with the default settings introduced in Sec. 4.2. As claimed in Sec. 3, our method consistently improves the performance on all criteria. BetaHead and BetaMask, which implement the Beta Representation and alleviate the occlusion issue with new regression targets and an attention mechanism, reduce MR−2 from 43.8% to 41.3% and improve AP from 85.2% to 87.1%. The Mask Loss, i.e., Eq. (8), helps the model obtain a more accurate mask. Moreover, the improvement from BetaNMS demonstrates its superiority over IoU-based NMS. We further analyze the role of each module: the Beta Representation captures more details of the shape and visibility of pedestrians, especially in occluded and crowded scenes; BetaMask uses the 2D beta distribution as an attention mechanism to modulate more discriminative features, which further enhances Beta R-CNN; and BetaNMS eliminates the inherent drawback of IoU-based NMS for highly overlapped instances in crowded scenes. More details can be found in Sec. 3.

Comparison with various NMS strategies. To illustrate the effectiveness of BetaNMS, we compare BetaNMS with IoU-based NMS on full-body/visible boxes (visible boxes are approximately transformed from the Beta Representation). Results are shown in Table 2; all reported experiments here are based on Beta R-CNN. BetaNMS outperforms all other NMS methods by a large margin. Compared with fIoU-based NMS, vIoU-based NMS tends to recall more overlapped instances but also brings in more false positives, reflected in both higher AP and higher MR−2. In addition, even when we integrate fIoU and vIoU in NMS, BetaNMS still outperforms by at least 0.4% on MR−2 and 1.5% on AP, which means BetaNMS distinguishes highly overlapped instances better than IoU-based NMS, whether the latter is based on the full-body box, the visible box, or both.

Speed/accuracy trade-off. Each proposed module in Beta R-CNN is lightweight, with little computational cost. We conduct speed experiments on the CrowdHuman validation set with an 800 × 1400 input size on NVIDIA 2080Ti GPUs (8 GPUs); the average speeds are 0.483 s/image for the Cascade R-CNN baseline and 0.487 s/image for Beta R-CNN respectively. The difference is negligible.

4.4 State-of-the-art (SOTA) Comparison on CrowdHuman

Comparisons with recent methods on the CrowdHuman validation set are shown in Table 3. They clearly show that our Beta R-CNN outperforms the others by a large margin, especially on the MR−2 metric; such a large gap demonstrates the superiority of Beta R-CNN. It is worth noting that CrowdDet [32] achieves a slightly higher AP than ours, which can be attributed to its motivation of emphasizing higher recall at the expense of more false positives, reflected in its higher MR−2.

4.5 Experiments on CityPersons

To further verify the generalization ability of our method, we also conduct experiments on CityPersons. Table 4 compares Beta R-CNN with some state-of-the-art methods. For a fair comparison, we only list methods that follow the standard settings, i.e., adopting the subset partition criterion in [19] and feeding images of the original size as inputs during evaluation. Because of the space limit, we report the results with 1.3x enlarged input images in our supplementary materials. From the table, we can see that Beta R-CNN outperforms all published methods on all four subsets, with an especially large margin on the Heavy subset, which verifies that our method is effective in occluded and crowded scenes.

5 Conclusion

In this paper, we propose a statistical representation for occluded pedestrians based on 2D beta distributions, which takes the paired boxes as an integrated whole and emphasizes the object's center of visual mass. Beta R-CNN, equipped with BetaHead and BetaMask, aims to improve pedestrian detection in occluded and crowded scenes, while BetaNMS effectively distinguishes highly overlapped instances based on the Beta Representation and KL divergence. The quantitative and qualitative experiments demonstrate the superiority of our methods. The Beta Representation, as well as BetaHead, BetaMask, and BetaNMS, is flexible enough to be integrated into other two-stage or single-shot detectors and is compatible with existing optimization methods to further boost their performance. Moreover, our method could be extended to more general scenes and other detection tasks.

Acknowledgements

This work was supported in part by the National Key Research and Development Program of China under Grant 2016QY02D0304 and the National Natural Science Foundation of China under Grant 60572002.

Broader Impact

Our contributions focus on a novel representation and pipeline for pedestrian detection, which can be extended to other computer vision tasks and may provide new ideas for follow-up research. It therefore has the potential to advance both the beneficial and harmful applications of object detectors, such as autonomous vehicles, intelligent video surveillance, and robotics. As for ethical aspects and future societal consequences, this technology can have harmful or beneficial effects on society, depending on the motivations of those who apply it.
1. What is the focus and contribution of the paper on pedestrian detection?
2. What are the strengths of the proposed approach, particularly in terms of its novel representation and NMS strategy?
3. What are the weaknesses of the paper, especially regarding the ablation study and the confusion in the output of the Be module?
4. Do you have any concerns about the calculation of the relation between the visible part and the full-body?
5. What are the limitations of the BetaNMS module, and how does it compare to other NMS methods such as softNMS?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

This paper proposes a Beta Representation for pedestrian detection. Combined with BetaNMS, the Beta Representation is much better at distinguishing highly overlapped instances in crowded scenes. The method is validated on the CityPersons and CrowdHuman datasets.

Strengths

The proposed Beta Representation is a novel representation for pedestrians, which reaches state-of-the-art results on the CrowdHuman and CityPersons datasets. KL divergence is used instead of IoU to measure the distance between two objects, a new NMS strategy (BetaNMS) is proposed, and its effectiveness is verified in ablation experiments.

Weaknesses

1. In the ablation study on CrowdHuman, the Mask Loss did not play an effective role; the authors should consider designing other loss functions or removing it.
2. In Figure 3, the output of the Be module is confusing: the visualization shows the mask of the BM module, but the output of the Be module is not explained clearly.
3. How the relation between the visible part and the full body is calculated requires a more detailed explanation, along with a full analysis of why this relation is effective.
4. For the BetaNMS module, it is recommended to add a comparative test using softNMS.
NIPS
Title
Beta R-CNN: Looking into Pedestrian Detection from Another Perspective

Abstract
Recently, significant progress has been made in pedestrian detection, but it remains challenging to achieve high performance in occluded and crowded scenes. This can be attributed mostly to the widely used representation of pedestrians, i.e., the 2D axis-aligned bounding box, which only describes the approximate location and size of the object. The bounding box models the object as a uniform distribution within the boundary, making pedestrians indistinguishable in occluded and crowded scenes due to much noise. To eliminate the problem, we propose a novel representation based on the 2D beta distribution, named Beta Representation. It pictures a pedestrian by explicitly constructing the relationship between the full-body and visible boxes, and emphasizes the center of visual mass by assigning different probability values to pixels. As a result, the Beta Representation is much better at distinguishing highly overlapped instances in crowded scenes with a new NMS strategy named BetaNMS. What's more, to fully exploit the Beta Representation, a novel pipeline Beta R-CNN, equipped with BetaHead and BetaMask, is proposed, leading to high detection performance in occluded and crowded scenes. Code will be released at github.com/Guardian44x/Beta-R-CNN.

1 Introduction
Pedestrian detection is a critical research topic in the computer vision field, with various real-world applications such as autonomous vehicles, intelligent video surveillance, robotics, and so on. During the last decade, with the rise of deep convolutional neural networks (CNNs), great progress has been achieved in pedestrian detection. However, it remains challenging to accurately distinguish pedestrians in occluded and crowded scenes. Although extensive methods have been attempted for occlusion and crowd issues, the performance is still limited by the pedestrian representation, i.e., the 2D bounding box representation. The axis-aligned minimum bounding box is widely utilized to explicitly define a distinct object, with its approximate location and size. Although box representation has advantages, such as being parameterization- and annotation-friendly as the identity of an object, some non-negligible drawbacks limit the performance of pedestrian detection, especially in occluded and crowded scenes. Firstly, the bounding box can be regarded as modeling the object as a uniform distribution in the box, which goes against our intuitive perception: given an occluded pedestrian, what attracts our attention should be the visible part rather than the occluded noise. Secondly, based on box representation, intersection over union (IoU) serves as the metric to measure the difference between objects, which makes it difficult to distinguish highly overlapped instances in crowded scenes. As shown in Fig. 2, even if a detector succeeds in identifying different human instances in a crowded scene, the highly overlapped detections may still be suppressed by the post-processing of non-maximum suppression (NMS).

[Fig. 2: Full-body and visible boxes, 2-value masks, and Beta Representations for four overlapped pedestrian pairs, with their fIoU, vIoU, and KL values (e.g., fIoU 0.74 / vIoU 0.21 / KL 9.95; fIoU 0.84 / vIoU 0.19 / KL 12.47).]
Last, the full-body and visible boxes treat a distinct person as two separate parts, which omits their inner relationship as a whole and leads to difficulty in model optimization. To eliminate the weaknesses of box representation while preserving its advantages, we propose a novel representation for pedestrians based on the 2D beta distribution, named Beta Representation. In probability theory, the beta distribution is a family of continuous probability distributions defined on the interval [0, 1], as depicted in Fig. 1. By assigning different values to α and β, we can control the shape of the beta distribution, especially the peak and the full width at half maximum (FWHM), which is naturally suitable for representing pedestrians with unpredictable visible patterns. We model each pedestrian as a 2D beta distribution on the image and generate eight new parameters as the Beta Representation. As illustrated in Fig. 2, the boundary of the 2D beta distribution is consistent with the full-body box, while the peak along with the FWHM depends on the relation between the visible part and the full-body box. Compared with the paired boxes, i.e., full-body and visible boxes, the 2D beta distribution treats each pedestrian more like an integrated whole and meanwhile emphasizes the object's center of visual mass. Besides, instead of IoU, the Kullback-Leibler (KL) divergence is adopted as a new metric to measure the distance between two objects, and the beta-distribution-based NMS strategy is named BetaNMS. Fig. 2 illustrates that while the bounding boxes are too close to distinguish (fIoU > 0.5, vIoU > 0.3; fIoU and vIoU denote the IoU calculated on the full-body and visible boxes respectively), the 2D beta distributions still maintain high discrimination (KL > 7) between each other, thereby leading to better performance in distinguishing highly overlapped instances. Moreover, to fully exploit the Beta Representation in pedestrian detection, we design a novel pedestrian detector named Beta R-CNN, equipped with two key modules, i.e., BetaHead and BetaMask. BetaHead is utilized to regress the eight beta parameters and the class score, while BetaMask serves as an attention mechanism to modulate the extracted features with beta-distribution-based masks. Experiments on the extremely crowded benchmarks CrowdHuman [1] and CityPersons [2] show that our proposed approach outperforms the state-of-the-art results, which strongly validates the superiority of our method.

2 Related Work
Pedestrian Detection. Pedestrian detection can be viewed as object detection for a specific category. With the development of deep learning, CNN-based detectors can be roughly divided into two categories: two-stage approaches [3, 4] comprise separate proposal generation followed by a classification and regression module to refine the proposals, while one-stage approaches [5–7] perform localization and classification simultaneously on the feature maps without a separate proposal generation module. Most existing pedestrian detection methods employ either the single-stage or the two-stage strategy as their model architecture.

Occlusion Handling. In pedestrian detection, occlusion leads to misclassified pedestrians. A common strategy is the part-based approach [8–11], which ensembles a series of body-part detectors to localize partially occluded pedestrians.
Some methods also train different models for the most frequent occlusion patterns [12, 13] or model different occlusion patterns in a joint framework [14, 15], but these are designed only for specific occlusion patterns and do not generalize well to other occluded scenes. Besides, attention mechanisms have been applied to handle different occlusion patterns [9, 16]. MGAN [16] introduces a novel mask-guided attention network, which emphasizes visible pedestrian regions while suppressing the occluded parts by modulating the extracted features. Moreover, a few recent works [17, 18] have explored utilizing annotations of the visible box as extra supervision to improve pedestrian detection performance.

Crowdedness Handling. As for crowded scenes, in addition to the misclassification issues, crowdedness makes it difficult to distinguish highly overlapped pedestrians. A few previous works propose new loss functions to address the problem of crowded detections. For example, OR-CNN [8] proposes an aggregation loss to enforce proposals to be close to the corresponding objects and to minimize the internal region distances of proposals associated with the same objects. RepLoss [19] proposes Repulsion Loss, which introduces an extra penalty on proposals intertwined with multiple ground truths. Moreover, some advanced NMS strategies [20–23, 18] have been proposed to alleviate the crowding issues to some extent, but they still take IoU as the metric to measure the difference between detected objects, which limits their performance in identifying highly overlapped instances among crowded boxes.

Object Representation. In computer vision, object representation is one of the primary topics, and there are many representations for objects in 2D images, such as 2D bounding boxes [4], polygons [24], splines [25], and pixels [26]. Each has strengths and weaknesses from a practical, application-specific perspective, differing in annotation cost, information density, and level of fidelity. A distribution-based representation has also been tried in [27], which utilizes the bivariate normal distribution as the representation of objects. However, when transformed from bounding boxes rather than segmentation, the mean and variance of the bivariate normal distribution remain consistent with the center and scale; besides, its performance is considerably poorer than that of other methods. In this paper, the Beta Representation provides a more detailed representation for occluded pedestrians, along with a new metric to substitute for IoU and a new detector, Beta R-CNN, thereby alleviating the occlusion and crowding issues to a great extent.

3 Method
In this section, we first introduce the parameterized Beta Representation for pedestrians. Then, to fully exploit the Beta Representation, a novel pipeline Beta R-CNN is proposed. Moreover, a specific NMS strategy based on the beta distribution and KL divergence, i.e., BetaNMS, is analyzed in detail.

3.1 Beta Representation
3.1.1 Beta Distribution
In probability theory and mathematical statistics, the beta distribution is a family of one-dimensional continuous probability distributions defined on the interval [0, 1], parameterized by two positive shape parameters α and β.
1. What is the focus and contribution of the paper regarding pedestrian detection in crowded scenes?
2. What are the strengths of the proposed approach, particularly in terms of the beta representation and BetaNMS method?
3. What are the weaknesses of the paper, especially regarding the lack of reports on speed, cost, and analysis of individual components?
4. Do you have any concerns about the soundness of the claims made in the paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

To deal with pedestrian detection in crowded scenes, the authors propose using a 2D beta distribution to jointly model the full-body box and the visible box. The proposed Beta Representation is further used in a new NMS method, which removes duplicate predictions according to the KL divergence. The authors validate their method on the mainstream and challenging CrowdHuman and CityPersons datasets.

Strengths

- The main idea of using the beta distribution to construct the relationship between the full-body and visible boxes is novel and interesting.
- I especially like the BetaNMS solution, which shows successful detections where the original IoU-based NMS fails. As we know, NMS is a main obstacle for detecting pedestrians in a crowd.
- The results on the CrowdHuman and CityPersons datasets are good.

Weaknesses

1. There is no speed or cost report that shows the overhead of each component and the final speed/accuracy trade-off.
2. The authors use a cascade framework, as in Fig. 3. They should clarify whether the improvements come from the cascade design.
3. There is a lack of analysis of how each component contributes, e.g., the BetaHead, BetaMask, and BetaLoss in Table 1.
4. The above concerns leave me unsure about the soundness of the claims.
NIPS
Title Beta R-CNN: Looking into Pedestrian Detection from Another Perspective Abstract Recently significant progress has been made in pedestrian detection, but it remains challenging to achieve high performance in occluded and crowded scenes. It could be attributed mostly to the widely used representation of pedestrians, i.e., 2D axis-aligned bounding box, which just describes the approximate location and size of the object. Bounding box models the object as a uniform distribution within the boundary, making pedestrians indistinguishable in occluded and crowded scenes due to much noise. To eliminate the problem, we propose a novel representation based on 2D beta distribution, named Beta Representation. It pictures a pedestrian by explicitly constructing the relationship between full-body and visible boxes, and emphasizes the center of visual mass by assigning different probability values to pixels. As a result, Beta Representation is much better for distinguishing highly-overlapped instances in crowded scenes with a new NMS strategy named BetaNMS. What’s more, to fully exploit Beta Representation, a novel pipeline Beta R-CNN equipped with BetaHead and BetaMask is proposed, leading to high detection performance in occluded and crowded scenes. Code will be released at github.com/Guardian44x/Beta-R-CNN. 1 Introduction Pedestrian detection is a critical research topic in computer vision field with various real-world applications such as autonomous vehicles, intelligent video surveillance, robotics, and so on. During the last decade, with the rise of deep convolutional neural networks (CNNs), great progress has been achieved in pedestrian detection. However, it remains challenging to accurately distinguish pedestrians in occluded and crowded scenes. Although extensive methods have been attempted for occlusion and crowd issues, the performance is still limited by pedestrian representation, i.e., 2D bounding box representation. The axis-aligned minimum bounding box is widely utilized to explicitly define a distinct object, with its approximate location and size. Although box representation has advantages such as parameterization- and annotation-friendly as the identity of an object, some nonnegligible drawbacks are limiting the performance of pedestrian detection especially in occluded and crowded scenes. Firstly, the bounding box can be regarded as modeling the object as a uniform distribution in the box, but it actually goes against our intuitive perception. Given an occluded pedestrian, what attracts our attention should be the visible part rather than the occluded noise. Secondly, based on box representation, intersection ∗These authors contributed equally 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. BBox representation 2-value Mask Beta Representation fIoU:0.74, vIoU:0.21, KL:9.95 fIoU:0.68, vIoU:0.31, KL:10.34 fIoU:0.61, vIoU:0.45, KL:8.28 fIoU:0.84, vIoU:0.19, KL:12.47 Full Box Visible Box over union (IoU) serves as the metric to measure the difference between objects, which results in difficulty to distinguish highly-overlapped instances in crowded scenes. As shown in Fig. 2, even if the detectors succeed to identify different human instances in a crowded scene, the highly-overlapped detections may also be suppressed by the post-processing of non-maximum suppression (NMS). 
Last, the full-body and visible boxes treat a distinct person as two separate parts, which omit their inner relationship as a whole and lead to difficulty for model optimization. To eliminate the weaknesses of box representation and preserve its advantages in the meanwhile, we propose a novel representation for pedestrians based on 2D beta distribution, named Beta Representation. In probability theory, the beta distribution is a family of continuous probability distribution defined in the interval [0, 1], as depicted in Fig. 1. By assigning different values to α, β, we could control the shape of the beta distribution, especially the peak and the full width at half maximum (FWHM), which is naturally suitable for pedestrian representation with unpredictable visible patterns. We take each pedestrian as a 2D beta distribution on the image and generate eight new parameters as the Beta Representation. As illustrated in Fig. 2, the boundary of 2D beta distribution is consistent with the full-body box, while the peak along with FWHM depends on the relation between the visible part and full-body box. Compared with paired boxes, i.e., full-body and visible boxes, 2D beta distribution treats each pedestrian more like an integrated whole and emphasizes the object center of visual mass meanwhile. Besides, instead of IoU, Kullback-Leibler (KL) divergence is adopted as a new metric to measure the distance of two objects and the beta-distribution-based NMS strategy is named BetaNMS. Fig. 2 illustrates that while the bounding boxes are too close to distinguish (fIoU > 0.5, vIoU > 0.32), the 2D beta distributions still maintain high discrimination (KL > 7) between each other, thereby leading to better performance in distinguishing highly-overlapped instances. Moreover, to fully exploit Beta Representation in pedestrian detection, we design a novel pedestrian detector named Beta R-CNN, equipped with two different key modules, i.e., BetaHead and BetaMask. BetaHead is utilized to regress the eight beta parameters and the class score, while BetaMask serves as an attention mechanism to modulate the extracted feature with beta-distribution-based masks. Experiments on the extremely crowded benchmark CrowdHuman [1] and CityPersons [2] show that our proposed approach can outperform the state-of-the-art results, which strongly validate the superiority of our method. 2 Related Work Pedestrian Detection. Pedestrian detection can be viewed as object detection for the specific category. With the development of deep learning, CNN-based detectors can be roughly divided into two categories: the two-stage approaches [3, 4] comprise separate proposal generation followed by classification and regression module to refine the proposals; and the one-stage approaches [5–7] perform localization and classification simultaneously on the feature maps without the separate 2FIoU and vIoU are the IoU calculated based on full-body/visible boxes respectively. proposal generation module. Most existing pedestrian detection methods employ either the singlestage or two-stage strategy as their model architectures. Occlusion Handling. In pedestrian detection, occlusion leads to misclassifying pedestrians. A common strategy is the part-based approaches [8–11], which ensemble a series of body-part detectors to localize partially occluded pedestrians. 
Also some methods train different models for most frequent occlusion patterns [12, 13] or model different occlusion patterns in a joint framework [14, 15], but they are all just designed for some specific occlusion patterns and not able to generalize well in other occluded scenes. Besides, attention mechanism has been applied to handle different occlusion patterns [9, 16]. MGAN [16] introduces a novel mask guided attention network, which emphasizes visible pedestrian regions while suppressing the occluded parts by modulating extracted features. Moreover, a few recent works [17, 18] have exploited to utilize annotations of the visible box as extra supervisions to improve pedestrian detection performance. Crowdness Handling. As for crowded scenes, except for the misclassifying issues, crowdedness makes it difficult to distinguish highly-overlapped pedestrians. A few previous works propose new loss functions to address the problem of crowded detections. For example, OR-CNN [8] proposes aggregation loss to enforce proposals to be close to the corresponding objects and minimize the internal region distances of proposals associated with the same objects. RepLoss [19] proposes Repulsion Loss, which introduces extra penalty to proposals intertwined with multiple ground truths. Moreover, some advanced NMS strategies [20–23, 18] are proposed to alleviate the crowded issues to some extent, but they still take IoU as the metric to measure the difference between detected objects, which limits the performance on identifying highly-overlapped instances from crowded boxes. Object Representation. In computer vision, object representation is one primary topic, and there are many representations for objects in 2D images, such as 2D bounding boxes [4], polygons [24], splines [25], and pixels [26]. Each has strengths and weaknesses from a specific application’s practical perspective, providing annotation cost, information density, and variable levels of fidelity. Distribution-based representation has also been tried in [27] which utilizes the bivariate normal distribution as the representation of objects. However, when transformed from bounding boxes rather than segmentation, the mean and variance of bivarite normal distribution are still consistent with the center and scale. Besides, its performance is considerably poor compared to other methods. In this paper, Beta Representation provides a more detailed representation for occluded pedestrians, along with a new metric to substitute for IoU and a new detector Beta R-CNN, thereby alleviating the occlusion and crowd issues to a great extent. 3 Method In this section, we first introduce the parameterized Beta Representation for pedestrians. Then to fully exploit the Beta Representation, a novel pipeline Beta R-CNN is proposed. Moreover, a specific NMS strategy based on beta distribution and KL divergence, i.e., BetaNMS, is analyzed in detail. 3.1 Beta Representation 3.1.1 Beta Distribution In probability theory and mathematical statistics, the beta distribution is a family of one-dimensional continuous probability distribution defined in the interval [0, 1], parameterized by two positive shape parameters α and β. 
For 0 ≤ x ≤ 1 and shape parameters α, β > 0, the probability density function (PDF) of beta distribution is a exponential function of the variable x and its reflection (1 − x) as follows: Be(x;α, β) = Γ(α+ β) Γ(α)Γ(β) · x(α−1)(1− x)(β−1) = 1 B(α, β) · x(α−1)(1− x)(β−1), (1) where Γ(z) is the gamma function andB(α, β) is a normalization factor to ensure the total probability is 1. Some beta distribution samples are shown in Fig. 1. According to the above definition, the mean µ, variance σ2 and shape parameter ν can be formulated as follows: µ = E(x) = α α+ β , σ2 = E(x− µ)2 = αβ (α+ β)2(α+ β + 1) , ν = α+ β. (2) 3.1.2 Beta Representation for Pedestrian As introduced in Sec. 3.1.1, the beta distribution has two key characteristics: 1) Boundedness, the beta distribution is defined in the interval [0, 1]; 2) Asymmetry, the peak and FWHM can be controlled by parameters α and β. These two characteristics make beta distribution suitable to describe the location, shape and visible pattern of occluded pedestrians. Parameterized Beta Representation is generated from the two annotated boxes, i.e., full-body and visible boxes. Considering bounding box is a 2D representation and it is always axis-aligned, we utilize two independent beta distributions on the x-axis and y-axis respectively. As mentioned before, we take the full-body box as the boundary of 2D beta distribution, while the peak along with FWHM depends on the relation between the visible part and full-body box. However, the transition relation between the peak, FWHM and the parameters α, β is hard to formulate. Instead, we calculate the mean and variance of the beta distribution with different weights assigned to the visible part and non-visible part, formulated as follows: µx = ∫ rf lf xf(x)dx∫ rf lf f(x)dx , σx 2 = ∫ rf lf (x− µx)2f(x)dx∫ rf lf f(x)dx , µy = ∫ bf tf yf(y)dy∫ bf tf f(y)dy , σy 2 = ∫ bf tf (y − µy)2f(y)dy∫ bf tf f(y)dy , (3) where [lf , tf , rf , bf ], [lv, tv, rv, bv] denote the full-body box and visible box respectively, and f(x) is defined as the weight of each pixel based on the visibility: f(x) = { Wv, lv ≤ x ≤ rv Wf , others , f(y) = { Wv, tv ≤ y ≤ bv Wf , others , (4) where Wf = 0.04,Wv = 1 in our experiments and the size of visible box can be approximated as wv = ρσx, hv = ρσy (ρ = √ 12). Finally, we can calculate the parameters α, β according to the normalized mean and variance, while λ (set to ρ/4) is a constant to keep α, β > 1: µx = µx − l r − l , µy = µy − t b− t , σx = λ · σx r − l , σy = λ · σy b− t , νx = αx + βx = µx(1 + µx) σ2x − 1, νy = αy + βy = µy(1 + µy) σ2y − 1, αx = µxνx = µx( µx(1 + µx) σ2x − 1), αy = µyνy = µy( µy(1 + µy) σ2y − 1), βx = (1− µx)νx, βy = (1− µy)νy. (5) Generally speaking, for each pedestrian, Beta Representation is parameterized by eight parameters, i.e., [l, t, r, b, αx, βx, αy, βy], where[l, t, r, b] are the boundaries indicating the location on the image, and [αx, βx, αy, βy] are the shape parameters of the 2D beta distribution describing the visibility of pedestrians. The probability density function of the 2D beta distribution over the whole image is formulated as follows: P (x, y) = { C ·Be(x̄; αx, βx) ·Be(ȳ; αy, βy), l ≤ x ≤ r, t ≤ y ≤ b, 0, others, (6) where x̄ = (x− l)/(r − l), ȳ = (y − t)/(b− t), and C is a normalization factor to keep the sum of PDF to 1. For pixels inside the beta boundary, the probability values are consistent with the product of two one-dimensional beta distribution, otherwise the probability values are set to zeros. 
Backbone RPN (Beta Head) Class Box RoI Pooling Beta Head Class Beta Beta Mask RoI Pooling Beta Head Class Beta 3.1.3 Advantages Our proposed Beta Representation shows several impressive advantages. Firstly, it is more precise in terms of the shape and visibility of pedestrians compared with box representation. While the bounding box models the object as a uniform distribution inside the box, 2D beta distribution concentrates more on the center of visual mass. Secondly, compared with the paired boxes, i.e., full-body box along with visible box, 2D beta distribution treats the pedestrian more like an integrated whole rather than two individual parts. Last, it can handle a few problematic situations such as identifying highly-occluded and highly-overlapped objects, which will be discussed in detail. Moreover, it is worth mentioning that pixel-wise annotations in segmentation can also be transformed to the parameterized Beta Representation based on the above equations. 3.2 Beta R-CNN To better implement the Beta Representation, we introduce a new detector named Beta R-CNN inspired by Faster R-CNN [4] and Cascade R-CNN [28]. The architecture is shown in Fig. 3. BetaHead and BetaMask are two core modules in Beta R-CNN. In the following section, we will discuss them respectively. 3.2.1 BetaHead Since we adopt Beta Representation to describe a pedestrian, BetaHead is designed to regress the eight beta parameters, i.e., [l, t, r, b, αx, βx, αy, βy], which is analogous to the regression head in vanilla Faster R-CNN. Specifically, as α, β are too abstractive to learn, we adopt the mean and variance as regression targets, i.e., [l, t, r, b, µx, µy, σx, σy]. The four boundary parameters, i.e., [l, t, r, b], utilize the same normalization strategy introduced in [4]. And for the other four shape parameters, i.e., [µx, µy, σx, σy], we adopt the normalization as follows: tµx = (µx − xa)/wa, tµy = (µy − ya)/ha, tσx = log(σx/wa), tσy = log(σy/ha), t∗µx = (µ ∗ x − xa)/wa, t∗µy = (µ ∗ y − ya)/ha, t∗σx = log(σ ∗ x/wa), t ∗ σy = log(σ ∗ y/ha), (7) where x, y, w, h denote the center coordinates and size of the boundary; µx, σx, µy, σy denote the mean and variance of the object; µ and µ∗ stand for the predicted and ground-truth beta respectively, while subscript a denotes the anchor box. SmoothL1 loss is adopted to optimize the BetaHead. 3.2.2 BetaMask BetaMask is another novel module introduced in Beta R-CNN. Most pedestrian detectors treat the whole extracted features of a person equally important, which will result in poor performance for high-occluded scenes due to the obvious noise. As we introduced in Sec. 3.1, Beta Representation itself has different focuses to picture a person, which emphasizes the visible part in occluded scenes. It is very intuitive to adopt attention mechanism with 2D beta distribution to highlight the features of visible parts and suppress other noise simultaneously, which could induce the network to pay more attention to the discriminative features and achieve better localization accuracy and higher confidence. Different from the common attention mechanism, our proposed BetaMask is based on 2D beta distribution, which is more targeted. In this paper, we directly generate the mask based on prediction results of the previous BetaHead instead of a CNN module like [16], as the beta mask is more like a parameterized probability distribution and it is difficult to keep the consistency of the distribution with convolutional kernels. 
Referring to Eq. (5), we obtain [α_x, β_x, α_y, β_y] from the predicted [l, t, r, b, µ_x, µ_y, σ_x, σ_y], and the mask values are sampled from the 2D beta distribution Be(x, y; α_x, β_x, α_y, β_y) = C · Be(x̄; α_x, β_x) · Be(ȳ; α_y, β_y). We then modulate the pooled features by an element-wise product with the sampled beta masks. Finally, we use the KL divergence as the loss function to supervise the BetaMask module:

$$L_{mask} = \sum_{x,y} \mathrm{Be}^*(x, y)\,\bigl(\log \mathrm{Be}^*(x, y) - \log \mathrm{Be}(x, y)\bigr), \tag{8}$$

where Be*(x, y) refers to the distribution generated from the ground truth, while Be(x, y) is generated from the predicted beta parameters.

3.3 BetaNMS

When it comes to NMS, instead of taking IoU as the metric to measure the difference between detected objects, we follow [27] and use the KL divergence as an alternative, but based on the 2D beta distribution rather than the bivariate normal distribution of [27]. The KL divergence is defined as:

$$D_{KL}(p\,\|\,q) = \sum_{x,y} p(x, y)\,\bigl(\log p(x, y) - \log q(x, y)\bigr), \tag{9}$$

where p and q refer to two parameterized distributions. In practice, to keep the distance metric symmetric, we adopt the symmetrized KL divergence:

$$\bar D_{KL}(p, q) = \bigl(D_{KL}(p\,\|\,q) + D_{KL}(q\,\|\,p)\bigr)/2. \tag{10}$$

Figure 4 shows significant differences between the symmetrized KL divergence metric and the IoU metric on the CrowdHuman validation set. Each dot stands for a pair of overlapped (fIoU > 0) pedestrians in the same scene, with 206088 dots in each graph. When we use the KL divergence and IoU respectively to perform non-maximum suppression between these paired boxes, we find only 2844 failure cases based on the KL divergence, while there are more than 10000 failure cases based on IoU, whether fIoU or vIoU. This comparison clearly demonstrates the superiority of our proposed Beta Representation and the BetaNMS strategy. More details are given in the experiments.

4 Experiment

4.1 Datasets

CityPersons Dataset. The CityPersons dataset [2] is a subset of Cityscapes that only contains person annotations. There are 2975 images for training, and 500 and 1575 images for validation and testing. The average number of pedestrians per image is 7. We evaluate our proposed method under the full-body setting, following the evaluation protocol in [2]; the partition of the validation set follows the standard setting in [19] based on visibility: Heavy [0, 0.65], Partial [0.65, 0.9], Bare [0.9, 1], Reasonable [0.65, 1].

CrowdHuman Dataset. The CrowdHuman dataset [1] was recently released to specifically target the crowd issue in human detection. There are 15000, 4370, and 5000 images in the training, validation, and testing sets respectively. The average number of persons per image is 22.6, which is much more crowded than other pedestrian datasets. All experiments are trained on the CrowdHuman training set and evaluated on the validation set.

Evaluation Metric. AP (Average Precision) is the most popular metric for detection; it reflects both the precision and recall of the detection results, and a larger AP indicates better performance. MR−2, short for the log-average Miss Rate over False Positives Per Image (FPPI) [29], is commonly used in pedestrian detection, and a smaller MR−2 indicates better performance. MR−2 emphasizes false positives and false negatives more than AP does, and both are critical in pedestrian detection.

4.2 Implementation Details

In this paper, we adopt a Feature Pyramid Network (FPN) [30] with ResNet-50 [31] as the backbone for all experiments.
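For illustration, below is a minimal NumPy/SciPy sketch of BetaNMS under Eqs. (9)-(10): it discretizes each detection's 2D beta distribution (Eq. (6)) on a shared grid and greedily suppresses detections whose symmetrized KL divergence to an already-kept detection is below a threshold. The grid size, the eps smoothing that keeps the divergence finite outside a box, and the threshold direction are our own assumptions, not the paper's exact implementation.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta2d_on_grid(det, xs, ys, eps=1e-8):
    """Evaluate the 2D beta PDF of Eq. (6) on a shared grid; the density is
    (almost) zero outside the box, and eps keeps the KL divergence finite."""
    l, t, r, b, ax, bx, ay, by = det
    xn = np.clip((xs - l) / (r - l), 0.0, 1.0)
    yn = np.clip((ys - t) / (b - t), 0.0, 1.0)
    px = np.where((xs >= l) & (xs <= r), beta_dist.pdf(xn, ax, bx), 0.0)
    py = np.where((ys >= t) & (ys <= b), beta_dist.pdf(yn, ay, by), 0.0)
    p = np.outer(py, px) + eps
    return p / p.sum()                      # normalize so it sums to 1

def sym_kl(p, q):
    """Symmetrized KL divergence, Eqs. (9)-(10)."""
    kl_pq = np.sum(p * (np.log(p) - np.log(q)))
    kl_qp = np.sum(q * (np.log(q) - np.log(p)))
    return 0.5 * (kl_pq + kl_qp)

def beta_nms(dets, scores, thresh=1.0, grid=96):
    """Greedy NMS with symmetrized KL in place of IoU: keep a detection only
    if it is sufficiently different from every detection kept so far."""
    xs = np.linspace(min(d[0] for d in dets), max(d[2] for d in dets), grid)
    ys = np.linspace(min(d[1] for d in dets), max(d[3] for d in dets), grid)
    pmfs = [beta2d_on_grid(d, xs, ys) for d in dets]
    keep = []
    for i in np.argsort(scores)[::-1]:      # highest score first
        if all(sym_kl(pmfs[i], pmfs[j]) > thresh for j in keep):
            keep.append(i)
    return keep
```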
The two-stage Cascade R-CNN [28] is taken as our baseline detection framework to perform a coarse-to-fine mechanism for more accurate beta prediction. As for anchor settings, we follow the anchor scales in [30], while the aspect ratios are set to H : W = {1 : 1, 2 : 1, 3 : 1}. For training, the batch size is 16, split over 8 GPUs. Each training run comprises 16000 iterations on CityPersons and 40000 iterations on CrowdHuman. The learning rate is initialized to 0.02 and divided by 10 at one half and three quarters of the total iterations respectively. During training, the sampling ratio of positive to negative proposals for the RoI branch is 1 : 1 on CrowdHuman and 1 : 4 on CityPersons. On CityPersons, the input size for both training and testing is 1024 × 2048. On CrowdHuman, the short edge of each image is resized to 800 pixels for both training and testing. It is worth mentioning that the proposed components of Beta R-CNN, such as the BetaHead, are all optimization-friendly; thus there is no essential difference between Beta R-CNN and Faster R-CNN [4] or Cascade R-CNN [28] in terms of training and testing.

4.3 Ablation Study on CrowdHuman

Ablation study and main results. Table 1 shows the ablation experiments for the components of Beta R-CNN proposed in Sec. 3, including the BetaHead, BetaMask, Mask Loss, and BetaNMS. The baseline is a two-stage Cascade R-CNN with the default settings introduced in Sec. 4.2. As claimed in Sec. 3, our method consistently improves the performance on all criteria. BetaHead and BetaMask, which implement the Beta Representation and alleviate the occlusion issue with new regression targets and an attention mechanism, reduce MR−2 from 43.8% to 41.3% and improve AP from 85.2% to 87.1%. The Mask Loss, i.e., Eq. (8), helps the model obtain a more accurate mask. Moreover, the improvement brought by BetaNMS demonstrates its superiority over IoU-based NMS. To summarize the role of each module: the Beta Representation captures more details of the shape and visibility of pedestrians, especially in occluded and crowded scenes; BetaMask uses the 2D beta distribution as an attention mechanism to modulate more discriminative features, which strengthens Beta R-CNN further; and BetaNMS eliminates the inherent drawback of IoU-based NMS on highly-overlapped instances in crowded scenes. More details can be found in Sec. 3.

Comparison with various NMS strategies. To illustrate the effectiveness of BetaNMS, we compare it with IoU-based NMS on full-body/visible boxes (visible boxes are approximately derived from the Beta Representation). Results are shown in Table 2; all experiments reported here are based on Beta R-CNN. BetaNMS outperforms all other NMS methods by a large margin. Compared with fIoU-based NMS, vIoU-based NMS tends to recall more overlapped instances but also brings in more false positives, which is reflected in its higher MR−2 together with higher AP. In addition, even when fIoU and vIoU are combined in NMS, BetaNMS still outperforms by at least 0.4% on MR−2 and 1.5% on AP, which means BetaNMS distinguishes highly overlapped instances better than IoU-based NMS, whether the latter is based on the full-body box, the visible box, or both.

Speed/accuracy trade-off. Each proposed module in Beta R-CNN is lightweight and adds little computation cost.
We take the CrowdHuman validation set with an 800 × 1400 input size to conduct speed experiments on NVIDIA 2080Ti GPUs (8 GPUs); the average speeds are 0.483 s/image (Cascade R-CNN baseline) and 0.487 s/image (Beta R-CNN) respectively. The difference is negligible.

4.4 State-of-the-art (SOTA) Comparison on CrowdHuman

Comparisons with recent methods on the CrowdHuman validation set are shown in Table 3. Beta R-CNN clearly outperforms the others by a large margin, especially on the MR−2 metric; such a large gap demonstrates its superiority. It is worth noting that CrowdDet [32] achieves a slightly higher AP than ours, which is attributable to its design goal of pursuing larger recall at the expense of more false positives, as reflected in its higher MR−2.

4.5 Experiments on CityPersons

To further verify the generalization ability of our method, we also conduct experiments on CityPersons. Table 4 compares Beta R-CNN with several state-of-the-art methods. For a fair comparison, we only list methods that follow the standard settings, i.e., adopting the subset partition criterion in [19] and feeding images at original size during evaluation. Due to space limits, we report results with 1.3× enlarged input images in the supplementary materials. The table shows that Beta R-CNN outperforms all published methods on all four subsets, with an especially large margin on the Heavy subset, which verifies that our method is effective in occluded and crowded scenes.

5 Conclusion

In this paper, we propose a statistical representation for occluded pedestrians based on 2D beta distributions, which treats the paired boxes as an integrated whole and emphasizes the object's center of visual mass. Beta R-CNN, equipped with BetaHead and BetaMask, aims to improve pedestrian detection in occluded and crowded scenes, and BetaNMS effectively distinguishes highly-overlapped instances based on the Beta Representation and KL divergence. Quantitative and qualitative experiments demonstrate the superiority of our methods. The Beta Representation, as well as BetaHead, BetaMask and BetaNMS, is flexible enough to be integrated into other two-stage or single-shot detectors and is also compatible with existing optimization methods, further boosting their performance. Moreover, our method can be extended to more general scenes and other detection tasks.

Acknowledgements

This work was supported in part by the National Key Research and Development Program of China under Grant 2016QY02D0304 and the National Natural Science Foundation of China under Grant 60572002.

Broader Impact

Our contributions focus on a novel representation and pipeline for pedestrian detection, which can be extended to other computer vision tasks and may provide new ideas for follow-up research. It therefore has the potential to advance both the beneficial and harmful applications of object detectors, such as autonomous vehicles, intelligent video surveillance, and robotics. As for ethical aspects and future societal consequences, this technology can bring harmful or beneficial effects to society, depending on the motivations of those who deploy it and on how responsibly this technological progress is used.
1. What is the focus and contribution of the paper on pedestrian detection? 2. What are the strengths of the proposed approach, particularly in its representation and performance? 3. What are the weaknesses of the paper regarding its requirements, network structure, evaluation metrics, and practicality? 4. Do you have any concerns or suggestions regarding the applicability and integration of the proposed method with other detectors? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper concerns pedestrian detection in occluded and crowded scenes. The authors observe that the conventional bounding-box representation (x1, x2; y1, y2) is limited and a poor approximation of location and size. To this end, a beta representation is proposed in place of the conventional bounding-box representation, and with it the authors propose Beta R-CNN and BetaNMS. Results are positive. Strengths There are several things to like about this paper: 1. Pedestrian detection in crowded/occluded scenes is very challenging, and seeing progress in this direction is essential. 2. The way the authors formulate the new representation is interesting. 3. The state-of-the-art performance. Weaknesses I list below the unclear parts and weaknesses: (1) Requirement: Does the proposed approach require two kinds of annotations to generate the beta representation, one for the full body and another for the visible body? (2) Network structure: This could be explained better. Why did the authors design it this way? Why are there two BH heads but only one BM head? (3) Evaluation: Is there a connection between MR^{-2} and AP? In my understanding there is a strong correlation, that is, if MR^{-2} decreases, AP should improve. Which metric is more suitable for pedestrian detection? (4) Practicality: I understand that the results of this paper are state-of-the-art. However, how practical is the proposed approach: a) Is training easier than for Faster R-CNN? b) In Table 2, to my eyes the performances are very similar; why should one choose BetaNMS? c) If BetaNMS is integrated into EMD [32], will its performance improve further? d) Can the proposed method be integrated into a single-shot detector? e) Is the beta representation applicable to standard pedestrian detection, e.g., the Caltech pedestrian dataset? It would be great if the authors considered this.
NIPS
Title An Algorithm to Learn Polytree Networks with Hidden Nodes Abstract Ancestral graphs are a prevalent mathematical tool to take into account latent (hidden) variables in a probabilistic graphical model. In ancestral graph representations, the nodes are only the observed (manifest) variables and the notion of m-separation fully characterizes the conditional independence relations among such variables, bypassing the need to explicitly consider latent variables. However, ancestral graph models do not necessarily represent the actual causal structure of the model, and do not contain information about, for example, the precise number and location of the hidden variables. Being able to detect the presence of latent variables while also inferring their precise location within the actual causal structure model is a more challenging task that provides more information about the actual causal relationships among all the model variables, including the latent ones. In this article, we develop an algorithm to exactly recover graphical models of random variables with underlying polytree structures when the latent nodes satisfy specific degree conditions. Therefore, this article proposes an approach for the full identification of hidden variables in a polytree. We also show that the algorithm is complete in the sense that when such degree conditions are not met, there exists another polytree with a smaller number of latent nodes satisfying the degree conditions and entailing the same independence relations among the observed variables, making it indistinguishable from the actual polytree. 1 Introduction The presence of unmeasured variables is a fundamental challenge in the discovery of causal relationships [1, 2, 3]. When the causal diagram is a Directed Acyclic Graph (DAG) with unmeasured variables, a common approach is to use ancestral graphs to describe the independence relations among the measured variables [2]. The main advantage of ancestral graphs is that they involve only the measured variables and successfully encode all their conditional independence relations via m-separation. Furthermore, complete algorithms have been devised to obtain ancestral graphs from observational data, e.g., the work in [3]. However, recovering the actual structure of the original DAG is something that ancestral graphs somehow circumvent. For example, it might be known that the actual causal diagram has a polytree structure including the hidden nodes, but the ancestral graph associated with the measured variables might not even be a polytree [4]. Instead, the recovery of causal diagrams including the location of their hidden variables is a very challenging task and algorithmic solutions are available only for specific scenarios [5, 6, 7, 8]. For example, in the case of specific distributions (i.e., Gaussian and Binomial) when the causal diagram is known to be a rooted tree, the problem has been solved by exploiting the additivity of a metric along the paths of the tree [6, 7, 8, 9]. In the case of generic distributions, though, additive metrics might be too difficult to define or cannot be defined in general. Furthermore, rooted trees can be considered a rather limiting class of networks since they represent probability distributions which can only be factorized according to second order conditional distributions [10]. This article makes a novel contribution towards the recovery of more general causal diagrams.
Indeed, it provides an algorithm to learn causal diagrams making no assumptions on the underlying probability distribution, and considering polytree structures which can represent factorizations involving conditional distributions of arbitrarily high order. Furthermore, it is shown that a causal diagram with a polytree structure can be exactly recovered if and only if each hidden node satisfies the following conditions: (i) the node has at least two children; (ii) if the node has exactly one parent, such a parent is not hidden; (iii) the node has at least degree 3, or each of its two children has at least another parent. The provided algorithm recovers every polytree structure with hidden nodes satisfying these conditions, and, remarkably, makes use only of third order statistics. If the degree conditions are not satisfied, then it is shown that there exists another polytree with a smaller number of hidden random variables which entails the same independence relations among the observed variables. Indeed, in this case, when no additional information/observations are provided, no test can be constructed to determine the true structure. Another main advantage of the proposed approach lies in the fact that it follows a form of Occam's razor principle: when the degree conditions on the hidden nodes are not met, a polytree with a minimal number of hidden nodes is selected. We find this property quite relevant in application scenarios since Occam's razor is arguably one of the cardinal principles in all sciences.

2 Preliminaries, Assumptions and Problem Definition

In order to formulate our problem, we first introduce a generalization of the notions of directed and undirected graphs (see for example [11, 12]) which also considers a partition of the set of nodes into visible and hidden nodes.

Definition 1 (Latent partially directed graph). A latent partially directed graph Ḡ_ℓ is a 4-tuple (V, L, E, ~E) where
• the disjoint sets V and L are named the set of visible nodes and the set of hidden nodes,
• the set E is the set of undirected edges containing unordered pairs of (V ∪ L) × (V ∪ L),
• the set ~E is the set of directed edges containing ordered pairs of (V ∪ L) × (V ∪ L).

We denote the unordered pair of two elements y_i, y_j ∈ V ∪ L as y_i − y_j, and the ordered pair of y_i, y_j (when y_i precedes y_j) as y_i → y_j. In a latent partially directed graph the sets E and ~E do not share any edges; namely, y_i − y_j ∈ E implies that both y_i → y_j and y_j → y_i are not in ~E. A latent partially directed graph is a fully undirected graph when ~E = ∅, and we simplify the notation by writing G_ℓ = (V, L, E). Similarly, when E = ∅, we have a fully directed graph, and we denote it by ~G_ℓ = (V, L, ~E). Furthermore, if we drop the distinction between visible and hidden nodes and consider V ∪ L as the set of nodes, we recover the standard notions of undirected and directed graphs. Thus, latent partially directed graphs inherit, in a natural way, all notions associated with standard graphs (e.g., path, degree, neighbor, etc.; see for example [11]). In the scope of this article, we denote the degree, outdegree, indegree, children, parents, descendants and ancestors of a node y in a graph ~G by deg_~G(y), deg⁺_~G(y), deg⁻_~G(y), ch_~G(y), pa_~G(y), de_~G(y) and an_~G(y), respectively (see [11, 12] for precise definitions). Furthermore, the notion of the restriction of a graph to a subset of nodes follows immediately.

Definition 2 (Restriction of a latent partially directed graph).
The restriction of a latent partially directed graph Ḡ_ℓ = (V, L, E, ~E) with respect to a set of nodes A ⊆ V ∪ L is the latent partially directed graph obtained by considering only the nodes in A and the edges linking pairs of nodes which are both in A.

Moreover, a latent partially directed graph is called a latent partially directed tree when there exists exactly one path connecting any pair of nodes.

Definition 3 (Latent partially directed tree). A latent partially directed tree ~P_ℓ is a latent partially directed graph Ḡ_ℓ = (V, L, E, ~E) where every pair of nodes y_i, y_j ∈ V ∪ L is connected by exactly one path.

Trivially, latent partially directed trees generalize the notions of undirected trees and polytrees (directed trees) [13]. In a latent partially directed tree, we define a hidden cluster as a group of hidden nodes that are connected to each other via a path constituted exclusively of hidden nodes.

Definition 4 (Hidden cluster). A hidden cluster in a latent partially directed tree ~P_ℓ = (V, L, E, ~E) is a set C ⊆ L such that for each distinct pair of nodes y_i, y_j ∈ C the unique path connecting them contains only nodes in C, and no node in C is linked to a node which is in L \ C.

Observe that each node in a hidden cluster has neighbors which are either visible or hidden nodes of the same cluster. Figure 1 (a) depicts a latent directed tree (or a latent polytree) and its hidden clusters C_1 and C_2 highlighted by the dotted lines. Furthermore, we introduce the set of (visible) neighbors of a hidden cluster, its closure and its degree.

Definition 5 (Neighbors, closure, and degree of a hidden cluster). In a latent partially directed tree, the set of all visible nodes linked to any of the nodes of a hidden cluster C is the set of neighbors of C and is denoted by N(C). We define the degree of the hidden cluster as |N(C)|, namely, the number of neighbors of the cluster. We refer to the restriction of a latent polytree to a hidden cluster and its neighbors as the closure of the hidden cluster.

Observe that the neighbors of C_1 are shaded with orange color in Figure 1 (a). We also recall the notion of a root node and define the notion of a root of a hidden cluster.

Definition 6 (Root of a latent polytree, and root of a hidden cluster in a latent polytree). In a latent polytree ~P_ℓ = (V, L, ~E), a root is a node y_r ∈ V ∪ L with indegree equal to zero. Also, we define any root of the restriction of the polytree to one of its hidden clusters as a root of the hidden cluster.

For example, in Figure 1 (a), node y_1 is a root of the latent polytree and node y_h3 is a root of the hidden cluster C_1. In this article, we make extensive use of the restriction of a polytree to the descendants of one of its roots. We define such a restriction as the rooted subtree of the polytree associated with that root. Additionally, given a latent partially directed tree, we define its collapsed representation by replacing each hidden cluster with a single hidden node. The formal definition is as follows, and Figure 1 (b) depicts the collapsed representation of the latent polytree of Figure 1 (a).

Definition 7 (Collapsed representation). We define the collapsed representation of ~P_ℓ = (V, L, E, ~E) as the latent partially directed tree ~P_c = (V, L_c, E_c, ~E_c), where n_c is the number of hidden clusters C_1, ..., C_{n_c}, L_c := {C_1, ..., C_{n_c}}, and

E_c := {y_i − y_j ∈ E | y_i, y_j ∈ V} ∪ {y_i − C_k | ∃ y_j ∈ C_k : y_i − y_j ∈ E} ∪ {C_k − y_j | ∃ y_i ∈ C_k : y_i − y_j ∈ E},
~E_c := {y_i → y_j ∈ ~E | y_i, y_j ∈ V} ∪ {y_i → C_k | ∃ y_j ∈ C_k : y_i → y_j ∈ ~E} ∪ {C_k → y_j | ∃ y_i ∈ C_k : y_i → y_j ∈ ~E}.
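Since Definitions 4-7 are purely graph-theoretic, they are easy to make concrete in code. Below is a small illustrative Python sketch (the function names and the edge-list encoding are our own assumptions, not from the paper) that computes the hidden clusters as connected components of the restriction to L, together with each cluster's set of visible neighbors N(C).

```python
from collections import defaultdict

def hidden_clusters(V, L, edges):
    """edges: iterable of (a, b) pairs; orientation is ignored (skeleton).
    Returns the hidden clusters (Definition 4) as frozensets, plus the
    visible neighbors N(C) of each cluster (Definition 5)."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    # Connected components of the subgraph induced by the hidden nodes L.
    clusters, seen = [], set()
    for h in L:
        if h in seen:
            continue
        comp, stack = set(), [h]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in adj[u] if v in L)
        seen |= comp
        clusters.append(frozenset(comp))
    neighbors = {C: {v for h in C for v in adj[h] if v in V} for C in clusters}
    return clusters, neighbors

# Tiny example: h1-h2 form one cluster with N(C) = {y1, y2}.
V, L = {"y1", "y2"}, {"h1", "h2"}
print(hidden_clusters(V, L, [("y1", "h1"), ("h1", "h2"), ("h2", "y2")]))
```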
In this article, we show the cases where graphical models with polytree structures can be recovered from the independence relations involving only visible nodes. Specifically, we assume that a polytree is a perfect map (see [14, 12]) for a probabilistic model defined over the variables V ∪ L, where V and L are disjoint sets. We find conditions under which it is possible to recover information about the perfect map of the probabilistic model considering only independence relations of the form I(y_i, ∅, y_j) (read: y_i and y_j are independent) and I(y_i, y_k, y_j) (read: y_i and y_j are conditionally independent given y_k) for all nodes y_i, y_j, y_k ∈ V. One of the fundamental requirements for solving this problem is that all hidden nodes satisfy certain degree conditions, summarized in the following definition.

Definition 8 (Minimal latent polytree). A latent polytree ~P_ℓ = (V, L, ~E) is minimal if every hidden node y_h ∈ L satisfies one of the following conditions:
• deg⁺_~P_ℓ(y_h) ≥ 2 and deg_~P_ℓ(y_h) ≥ 3, and if |pa_~P_ℓ(y_h)| = 1, then pa_~P_ℓ(y_h) ⊆ V;
• deg⁺_~P_ℓ(y_h) = 2 and deg⁻_~P_ℓ(y_h) = 0 and deg⁻_~P_ℓ(y_c1), deg⁻_~P_ℓ(y_c2) ≥ 2, where ch_~P_ℓ(y_h) = {y_c1, y_c2}.

Note that the nodes y_h2, y_h4, y_h5, y_h7 in Figure 1 (a) do not satisfy the minimality conditions and therefore that hidden polytree is not minimal. Instead, Figure 1 (c) shows a minimal latent polytree. The algorithm we propose to recover the structure of a latent polytree can be decomposed into several tasks, and the hidden nodes which are roots with outdegree equal to 2 and at least one visible child need to be dealt with in a special way in the last task of the algorithm. Therefore, we define the following two types of hidden nodes to make this distinction.

Definition 9 (Type-I and type-II hidden nodes). In a minimal latent polytree, we classify a hidden node y_h as type-II when deg(y_h) = 2 with at least one visible child. All other hidden nodes are classified as type-I.

In the minimal latent polytree of Figure 2 (a), the hidden nodes y_h2 and y_h3 are type-II hidden nodes, while all the other hidden nodes are type-I. We define the quasi-skeleton of a minimal latent polytree to deal with type-II hidden nodes separately.

Definition 10 (Quasi-skeleton of a latent polytree). In a minimal latent polytree ~P_ℓ = (V, L, ~E), the quasi-skeleton of ~P_ℓ is the undirected graph obtained by removing the orientation of all edges in ~P_ℓ, removing every type-II hidden node, and linking its two children together.

In Figure 2 (b), we have the quasi-skeleton of the polytree of Figure 2 (a). Observe that we can easily define the collapsed representation of a quasi-skeleton of a latent polytree by finding the quasi-skeleton first and then finding its collapsed representation, as in Figure 2 (c). As is well known in the theory of graphical models, in the general case, from a set of conditional independence statements (formally, a semi-graphoid) faithful to a Directed Acyclic Graph (DAG), it is not possible to recover the full DAG [15, 1]. What can be recovered for sure is the pattern of the DAG, namely the skeleton and the v-structures (i.e., y_i → y_k ← y_j) of the DAG [15, 1].
In this article, we show that, similarly, in the case of a minimal latent polytree, we are able to recover the pattern of the polytree from the independence statements involving only the visible variables.

Definition 11 (Pattern of a polytree). Let ~P = (N, ~E) be a polytree. The pattern of ~P is a partially directed graph where the orientation of all the v-structures (i.e., y_i → y_k ← y_j) is known and as many of the remaining undirected edges as possible are oriented, in such a way that the alternative orientation would result in a v-structure.

Now we have all the necessary tools to formulate the problem.

Problem Formulation. Assume a semi-graphoid defined over a set of variables V ∪ L. Let the latent polytree ~P_ℓ = (V, L, ~E) be faithful to the semi-graphoid and assume that the nodes in L satisfy the minimality conditions. Recover the pattern of ~P_ℓ from conditional independence relations involving only nodes in V.

Remark 12. The proposed solution makes use only of the conditional independence relations of the form I(y_i, ∅, y_j) and I(y_i, y_k, y_j) for all y_i, y_j, y_k ∈ V.

3 An Algorithm to Reconstruct Minimal Hidden Polytrees

Our algorithm for learning the pattern of a minimal latent polytree is made of the following 5 tasks:
1. Using the independence statements involving the visible nodes, determine the number of rooted subtrees in the latent polytree and their respective sets of visible nodes;
2. Given all the visible nodes belonging to each rooted subtree, determine the collapsed quasi-skeleton of each rooted subtree;
3. Merge the overlapping hidden clusters in the collapsed quasi-skeletons of the rooted subtrees to obtain the collapsed quasi-skeleton of the latent polytree;
4. Determine the quasi-skeleton of the latent polytree from the collapsed quasi-skeleton of the latent polytree (recover type-I hidden nodes);
5. Obtain the pattern of the latent polytree from the recovered quasi-skeleton of the latent polytree (recover type-II hidden nodes and edge orientations).

Figure 3 shows the stage of the recovery of the polytree structure at the end of each task. The following subsections provide more details about each task, but the most technical results are in the Supplementary Material. We stress that the first two tasks mostly leverage previous work on rooted trees, and the main novelty of this article lies in Tasks 3, 4 and 5.

3.1 Task 1: Determine the visible nodes of each rooted subtree

This first task can be performed by the Pairwise-Finite Distance Algorithm (PFDA), presented in [16] and reported in the Supplementary Material as Algorithm 4. As shown in [16], PFDA takes as input the set of visible nodes of a latent polytree and outputs sets of visible nodes with the property that each set corresponds to the visible descendants of a root of the latent polytree, when the polytree is minimal. In the following theorem, we show that the output of PFDA applied to the independence statements is as described above. See the Supplementary Material for the proof of this theorem.

Theorem 13. Consider a latent polytree ~P_ℓ = (V, L, ~E) faithful to a probabilistic model. Assume that the hidden nodes in L satisfy the minimality conditions. Then PFDA, applied to the independence statements of the probabilistic model of the form I(y_i, ∅, y_j) for all y_i, y_j ∈ V, outputs a collection of sets, each of which is given by all the visible descendants of a root of ~P_ℓ.
3.2 Task 2: Determine the collapsed quasi-skeleton of each rooted subtree

The second task is performed by the Reconstruction Algorithm for Latent Rooted Trees in [17]. We report it as Algorithm 5 in the Supplementary Material for completeness. The input of this algorithm is the set V_r of the visible nodes belonging to a rooted subtree T_r and the independence relations of the form I(y_i, y_k, y_j) or ¬I(y_i, y_k, y_j) for distinct y_i, y_j, y_k ∈ V_r. Its output is the collapsed quasi-skeleton of T_r. Thus, we can call this algorithm on each of the sets of visible nodes V_1, ..., V_nr, where n_r is the number of roots, obtained from Task 1, and find the collapsed quasi-skeletons of all the rooted subtrees of the latent polytree. This result is formalized in the following theorem. See the Supplementary Material for the proof of this theorem.

Theorem 14. Let ~P_ℓ = (V, L, ~E) be a minimal latent polytree. Consider a root y_r of ~P_ℓ and let V_r = V ∩ de_~P_ℓ(y_r). The output of the Reconstruction Algorithm for Latent Rooted Trees applied to V_r is the collapsed quasi-skeleton of the rooted subtree with root node y_r.

3.3 Task 3: Merge the overlapping hidden clusters of the collapsed rooted trees

By applying the Reconstruction Algorithm for Latent Rooted Trees to each set of visible nodes belonging to the same rooted tree, we obtain the collapsed quasi-skeletons of all rooted subtrees of the original hidden polytree. In the general case, some hidden clusters in the collapsed quasi-skeletons of the rooted subtrees might overlap, namely, they might share some hidden nodes of the original hidden polytree. The following theorem provides a test on the sets of visible nodes of the rooted subtrees of a minimal latent polytree to determine whether two hidden clusters in two distinct collapsed quasi-skeletons of two rooted subtrees belong to the same cluster in the collapsed quasi-skeleton of the polytree. See the Supplementary Material for the proof of this theorem.

Theorem 15. Consider a minimal latent polytree ~P_ℓ. Let C_1 and C_2 be two distinct hidden clusters in the collapsed quasi-skeletons of two rooted subtrees of ~P_ℓ. If the set of neighbors of C_1 and the set of neighbors of C_2 share at least a pair of visible nodes, i.e., |N(C_1) ∩ N(C_2)| ≥ 2, then the nodes in C_1 and C_2 belong to the same hidden cluster in the collapsed quasi-skeleton of ~P_ℓ.

This theorem is the enabling result for the Hidden Cluster Merging Algorithm (HCMA), presented in Algorithm 1, which merges all the collapsed quasi-skeletons associated with the individual rooted subtrees, obtained from Task 2, into the collapsed quasi-skeleton of the polytree. The algorithm starts with the collapsed quasi-skeletons of the rooted subtrees, finds pairs of clusters that overlap by testing whether they share at least one pair of visible neighbors (see Theorem 15), and then merges the overlapping pairs. This procedure is repeated until no clusters can be merged anymore.
Algorithm 1 Hidden Cluster Merging Algorithm
Input: the collapsed quasi-skeletons of the rooted subtrees T_i = (V_i, L_i, E_i) for i = 1, ..., n_r
Output: the collapsed quasi-skeleton P of the latent polytree
1: Initialize the set of clusters P with the hidden clusters of all T_i, i.e., P := {{C_1}, {C_2}, ..., {C_k}}
2: while there are two elements C_i, C_j ∈ P such that |N(C_i) ∩ N(C_j)| ≥ 2 do
3:   remove C_i, C_j from P and add C_i ∪ C_j to P
4:   define N(C_i ∪ C_j) := N(C_i) ∪ N(C_j)
5: end while
6: Define the polytree P = (∪_i V_i, P, E) where
   E := {{y_a, y_b} | ∃ i : y_a, y_b ∈ V_i, y_a − y_b ∈ E_i} ∪ {{y_a, C_b} | ∃ i, h : y_a ∈ V_i, y_h ∈ L_i, L_i ⊆ C_b, C_b ∈ P, y_a − y_h ∈ E_i}

The following theorem guarantees that, for a minimal latent polytree, the output of HCMA is the collapsed quasi-skeleton of the polytree. See the Supplementary Material for the proof of this theorem.

Theorem 16. Let ~P_ℓ = (V, L, ~E) be a minimal latent polytree and let T_i = (V_i, L_i, E_i) for i = 1, ..., n_r be the collapsed quasi-skeletons of the rooted subtrees of ~P_ℓ. Then HCMA outputs the collapsed quasi-skeleton of ~P_ℓ.
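As an illustration of Theorem 15 and Algorithm 1, here is a hedged Python sketch of the merging loop; the cluster/neighbor encoding follows the `hidden_clusters` sketch above and is our own choice, not the paper's.

```python
def hcma(clusters, neighbors):
    """Merge hidden clusters that share >= 2 visible neighbors (Theorem 15).
    clusters: list of frozensets of hidden nodes (pooled over all subtrees);
    neighbors: dict mapping each cluster to its visible-neighbor set N(C)."""
    merged = [(set(c), set(neighbors[c])) for c in clusters]
    changed = True
    while changed:                              # repeat until no merge fires
        changed = False
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                ci, ni = merged[i]
                cj, nj = merged[j]
                if len(ni & nj) >= 2:           # overlap test of Theorem 15
                    merged[i] = (ci | cj, ni | nj)
                    del merged[j]
                    changed = True
                    break
            if changed:
                break
    return merged
```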
3.4 Task 4: Determine the quasi-skeleton of the latent polytree from the collapsed quasi-skeleton of the latent polytree (recover type-I hidden nodes)

After performing the HCMA, the output is the collapsed quasi-skeleton of the latent polytree; thus the structure of the hidden nodes within each hidden cluster is not yet known. Note that the restriction of the original polytree to the closure of a hidden cluster is a smaller polytree. The goal of this task is to recover the structure of the hidden clusters by focusing on each individual closure (i.e., recover type-I hidden nodes and their connectivities). Given the closure of a hidden cluster, the basic strategy is to detect one root of the hidden cluster along with the visible nodes (if any) linked to this root. Then we label such a root as a visible node, add edges between this node and its visible neighbors, and subsequently apply the same strategy recursively to the descendants of the detected root. Since we focus on the closure of a specific hidden cluster, say C, we define the sets Ṽ_r = V_r ∩ N(C) for r = 1, ..., n_r, where n_r is the number of rooted subtrees in the latent polytree and the V_r are the sets of visible nodes of each rooted subtree (obtained from Task 1). A fundamental result for the detection of a root of a hidden cluster is the following theorem. See the Supplementary Material for the proof of this theorem.

Theorem 17. Let ~P_ℓ be a minimal latent polytree and let ~T_r = (V_r, L_r, ~E_r) with r = 1, ..., n_r be the rooted subtrees of ~P_ℓ. Let C be a hidden cluster in the collapsed quasi-skeleton of ~P_ℓ. Define Ṽ_r := V_r ∩ N(C) for r = 1, ..., n_r, where n_r is the number of roots of ~P_ℓ. Then T_r contains a hidden root of C if and only if Ṽ_r ≠ ∅ and for all Ṽ_r′ with r′ ≠ r we have |Ṽ_r \ Ṽ_r′| > 1 or |Ṽ_r′ \ Ṽ_r| ≤ 1.

To make the application of this theorem clearer, consider the latent polytree introduced in Figure 3 (True). After applying the first three tasks, we obtain the collapsed quasi-skeleton of the latent polytree as depicted in Figure 3 (Task 3). Observe that the rooted subtrees ~T_1 (with root y_1) and ~T_2 (with root y_2) satisfy the conditions of Theorem 17, indicating that they contain a root of the hidden cluster. The following lemma allows one to find the visible nodes linked to a hidden root in the closure of a hidden cluster. See the Supplementary Material for the proof of this lemma.

Lemma 18. Let ~P_ℓ be a minimal latent polytree. Consider a hidden root y_h of a hidden cluster C in the collapsed quasi-skeleton of ~P_ℓ, where y_h belongs to the rooted subtree T_r = (V_r, L_r, ~E_r). Define Ṽ_r′ := V_r′ ∩ N(C) for r′ = 1, ..., n_r, where n_r is the number of roots of ~P_ℓ. The visible nodes linked to y_h are given by the set W \ W̄, where

I := {r} ∪ {r′ such that |Ṽ_r \ Ṽ_r′| = |Ṽ_r′ \ Ṽ_r| = 1}, W := ⋃_{i∈I} Ṽ_i, W̄ := ⋃_{i∉I} Ṽ_i.

We follow the example of Figure 3 to show the steps of Task 4 in more detail. Without loss of generality, choose T_r = T_1. Consider the closure of C_A′ obtained at the end of Task 3 and apply Lemma 18 to obtain I = {1, 2}, W = {y_1, y_2, y_10, y_12, y_13, y_14, y_15, y_16, y_17}, W̄ = {y_5, y_6, y_9, y_11, y_12, y_13, y_14, y_15, y_16, y_17}, and thus W \ W̄ = {y_1, y_2, y_10}. Therefore, the visible nodes linked to the hidden root in T_1 are y_1, y_2 and y_10.

Now we introduce the Hidden Cluster Learning Algorithm (HCLA), presented in Algorithm 2, to learn the structure of a hidden cluster. Again, consider the closure of the hidden cluster C_A′ as depicted in Figure 4 (Task 4a), obtained at the end of Task 3. Applying the Hidden Node Detection procedure to C_A′, the output at the end of Step 23 of Algorithm 2 is shown in Figure 4 (Task 4b). The output of the merging in Steps 24-27 is depicted in Figure 4 (Task 4c), and the output of the merging in Step 28 is depicted in Figure 4 (Task 4d). We can then apply the same procedure recursively to the remaining hidden clusters to obtain the final output of Task 4, the quasi-skeleton of the polytree, as depicted in Figure 3 (Task 4). Here, we state that the output of HCLA is the quasi-skeleton of the latent polytree; see the Supplementary Material for the proof of this theorem.

Theorem 19. Let ~P_ℓ = (V, L, ~E) be a minimal latent polytree. When HCLA is applied to all hidden clusters of the collapsed quasi-skeleton of ~P_ℓ, the output P = (V, E) is the quasi-skeleton of ~P_ℓ. Furthermore, HCLA also outputs, for each pair y_i, y_j ∈ V, the relation I(y_i, ∅, y_j) if and only if the path connecting y_i and y_j in ~P_ℓ contains an inverted fork.
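To make the root-detection step of Theorem 17 and Lemma 18 concrete before the full pseudocode, here is a small hedged Python sketch; the input encoding (a dict of the sets Ṽ_r = V_r ∩ N(C)) and all names are illustrative assumptions.

```python
def find_cluster_root(V_tilde):
    """V_tilde: dict r -> set of visible neighbors of cluster C in subtree r.
    Returns (r, visible nodes linked to the root) per Theorem 17 / Lemma 18."""
    for r, Vr in V_tilde.items():
        if not Vr:                       # Theorem 17 requires V_r nonempty
            continue
        # Theorem 17: T_r contains a hidden root of C iff for every r' != r,
        # |V_r \ V_r'| > 1 or |V_r' \ V_r| <= 1.
        if all(len(Vr - Vp) > 1 or len(Vp - Vr) <= 1
               for rp, Vp in V_tilde.items() if rp != r):
            # Lemma 18: build the index set I and the unions W, W_bar.
            I = {r} | {rp for rp, Vp in V_tilde.items()
                       if rp != r and len(Vr - Vp) == 1 and len(Vp - Vr) == 1}
            W = set().union(*(V_tilde[i] for i in I))
            W_bar = set().union(*(Vp for rp, Vp in V_tilde.items()
                                  if rp not in I))
            return r, W - W_bar          # visible nodes linked to the root
    return None
```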
Algorithm 2 Hidden Cluster Learning Algorithm
Input: the collapsed quasi-skeleton of a minimal polytree ~P_ℓ, the collapsed quasi-skeletons of the rooted subtrees T_i = (V_i, L_i, E_i) for i = 1, ..., n_r, and the set of hidden clusters P = {C_1, ..., C_nC}
Output: P and the independence relations of the form I(y_a, ∅, y_b) or ¬I(y_a, ∅, y_b) for all nodes y_a, y_b ∈ ⋃_i V_i
1: while P ≠ ∅ do
2:   Call Hidden Node Detection Procedure(C_1) where C_1 is the first element of P
3: end while
4: procedure Hidden Node Detection(C)
5:   Compute Ṽ_i = V_i ∩ N(C)
6:   Find Ṽ_r which satisfies |Ṽ_r \ Ṽ_r′| > 1 or |Ṽ_r′ \ Ṽ_r| ≤ 1 for all r′ ≠ r (as in Theorem 17)
7:   Initialize W := Ṽ_r, W̄ := ∅, and I := {r}
8:   for all i = 1, ..., n_r with i ≠ r do
9:     if |Ṽ_r \ Ṽ_i| = 1 and |Ṽ_i \ Ṽ_r| = 1 (as in Lemma 18) then
10:      W := W ∪ Ṽ_i and I := I ∪ {i}
11:    else
12:      W̄ := W̄ ∪ Ṽ_i
13:    end if
14:  end for
15:  A new hidden node y_h is revealed
16:  Add y_h to all the rooted trees T_i with i ∈ I, namely V_i := V_i ∪ {y_h}
17:  Add the independence relation ¬I(y_h, ∅, y) for all y ∈ V_i with i ∈ I, and add the independence relation I(y_h, ∅, y) for all other nodes y
18:  Link all nodes in W \ W̄ to y_h in all T_i with i ∈ I, namely E_i := E_i ∪ {{y_h, y} | y ∈ W \ W̄}
19:  for all i ∈ I do
20:    create n_k = |W ∩ W̄| new clusters: C^(i)_1, ..., C^(i)_nk
21:    link y_h to C^(i)_1, ..., C^(i)_nk
22:    link each cluster C^(i)_1, ..., C^(i)_nk to a distinct element in W ∩ W̄
23:  end for
24:  while ∃ y_a, y_b ∈ N(C^(i)_j) ∪ N(C^(i)_k) such that y_a, y_b ∈ Ṽ_m where m ∉ I do
25:    merge the two hidden clusters C^(i)_j and C^(i)_k
26:    update the structure of T_i with the new hidden clusters
27:  end while
28:  Let P = (V, P, E) be the output of HCMA applied to T_i = (V_i, L_i, E_i), for i = 1, ..., n_r
29: end procedure

3.5 Task 5: Obtain the pattern of the latent polytree from the recovered quasi-skeleton of the latent polytree (recover type-II hidden nodes and edge orientations)

Once the quasi-skeleton of the latent polytree has been obtained, the only nodes still missing from the full skeleton are the type-II hidden nodes of the original polytree. Interestingly, the detection of such hidden nodes can be performed concurrently with the recovery of the edge orientations. In particular, we apply Rebane and Pearl's algorithm [13] to orient the edges of the quasi-skeleton of the polytree. Edges receiving both orientations then imply the presence of a type-II hidden node between the two linked nodes. Thus, the Hidden Root Recovery Algorithm (HRRA), presented in Algorithm 3, is simply an implementation of Rebane and Pearl's algorithm (Steps 1-4), as depicted in Figure 4 (Task 5a), with the additional detection of type-II hidden nodes (Steps 5-10). As a consequence, we have the final result stated in Theorem 20, showing that HRRA outputs the pattern of the latent polytree. See the Supplementary Material for the proof of this theorem.

Theorem 20. Let ~P_ℓ be a minimal latent polytree. When the input is the quasi-skeleton of ~P_ℓ together with the independence statements of the form I(y_i, ∅, y_j) or ¬I(y_i, ∅, y_j) for all pairs of nodes y_i and y_j, the output of HRRA is the pattern of ~P_ℓ.

For a complete step-by-step example of this algorithm, see the Supplementary Material.

Algorithm 3 Hidden Root Recovery Algorithm
Input: P = (V, E), the quasi-skeleton of a latent polytree, and the independence relations of the form I(y_i, ∅, y_j) or ¬I(y_i, ∅, y_j) for all nodes y_i, y_j ∈ V
Output: the partially directed polytree P̄ = (V, E, ~E)
1: while additional edges are oriented do
2:   if y_i − y_k, y_j − y_k ∈ E and I(y_i, ∅, y_j), then add y_i → y_k and y_j → y_k to ~E
3:   if y_i → y_k ∈ ~E, y_k − y_j ∈ E and ¬I(y_i, ∅, y_j), then add y_k → y_j to ~E
4: end while
5: Remove the edges that are oriented in ~E from E
6: for all y_i, y_j such that y_i → y_j, y_j → y_i ∈ ~E do
7:   a new hidden node of type-II is detected which is a parent of y_i and y_j
8:   remove y_i → y_j, y_j → y_i from ~E
9:   add a new node y_h to V
10:  add y_h → y_j, y_h → y_i to ~E
11: end for
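Below is a hedged Python sketch of HRRA's core logic (Algorithm 3); the edge and independence encodings are our own assumptions. A double-oriented edge signals a type-II hidden parent, which the sketch materializes as a fresh node.

```python
import itertools

def hrra(V, E, indep):
    """V: set of nodes; E: set of frozenset undirected edges;
    indep[(a, b)]: True iff I(a, emptyset, b) holds (keys are symmetric).
    Returns (V, remaining undirected edges, directed edges)."""
    V, E, D = set(V), {frozenset(e) for e in E}, set()
    changed = True
    while changed:                                   # Steps 1-4
        changed = False
        for yi, yj in itertools.permutations(V, 2):
            for yk in V - {yi, yj}:
                e_ik, e_jk = frozenset((yi, yk)), frozenset((yj, yk))
                # v-structure rule: yi -> yk <- yj when yi, yj independent
                if e_ik in E and e_jk in E and indep[(yi, yj)]:
                    changed |= not {(yi, yk), (yj, yk)} <= D
                    D |= {(yi, yk), (yj, yk)}
                # propagation: yi -> yk, yk - yj, dependent => yk -> yj
                if (yi, yk) in D and e_jk in E and not indep[(yi, yj)]:
                    changed |= (yk, yj) not in D
                    D.add((yk, yj))
    E -= {frozenset((a, b)) for a, b in D}           # Step 5
    for yi, yj in list(D):                           # Steps 6-10
        if (yi, yj) in D and (yj, yi) in D:          # double orientation
            D -= {(yi, yj), (yj, yi)}
            yh = f"h_{yi}_{yj}"                      # fresh type-II hidden node
            V.add(yh)
            D |= {(yh, yi), (yh, yj)}
    return V, E, D
```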
4 Conclusions and Discussion

We have provided an algorithm to reconstruct the pattern of a latent polytree graphical model. The algorithm only requires the second and third order statistics of the observed variables, and no prior information about the number and location of the hidden nodes is assumed. An important property of the proposed approach is that the algorithm is sound under specific degree conditions on the hidden variables. If such degree conditions are not met, it is shown in the Supplementary Material that there exists another latent polytree with a smaller number of hidden nodes entailing the same independence relations. In this sense, the proposed algorithm always recovers a graphical model that is minimal in the number of hidden nodes, following a form of Occam's razor principle. Future work will study how this algorithm performs under limited amounts of data and how to deal with situations where the measurements are not exact.

Acknowledgments

This work has been partially supported by NSF (CNS CAREER #1553504).
1. What is the focus of the paper regarding causal structures with latent variables? 2. What are the strengths and limitations of the proposed method, particularly in its application to polytree causal networks? 3. How does the reviewer assess the clarity and wording of the paper's content, especially regarding definitions and step-by-step processes? 4. What are some specific areas where the paper could benefit from greater clarity or additional examples? 5. How does the reviewer evaluate the significance of the paper's contributions, despite its limited scope on a special case?
Review
Review Learning causal structures with latent variables is a major challenge. This paper takes a shot at one of the simplest cases, polytree causal networks. While this is a limited special case, the ideas and methods may be useful more generally. It is interesting that the method needs only second and third order statistics of the observed variables. The paper would benefit from greater clarity in several areas. The paper defines a quasi-skeleton and a collapsed representation, but I don't see a definition of a collapsed quasi-skeleton. Step 3 of the process in Section 3 should be worded more carefully. First, as noted above, collapsed quasi-skeletons are not defined. Second, what does it mean for the collapsed quasi-skeletons to "partially overlap"? The collapsed representation replaces each hidden cluster with a single hidden variable; so what does it mean for two single hidden variables to "partially overlap"? An example of partial overlap might be helpful; none of your quasi-skeleton examples have any overlap in their hidden clusters. Your supplemental material gives conditions required for this to happen, but there are no examples. This makes the algorithm hard to understand. The authors have already noted in the supplemental material that Figure 1(b) needed to include orientations on the edges. This confused me because I read the paper prior to the supplemental material. Thanks also for pointing out the errors in HRRA. The abstract could be greatly improved. The first sentence says that an approach is given by a formulation. This is nonsense! An ancestral graph is a mathematical object; what is the "ancestral graph approach"? Near the end of a very difficult-to-follow abstract, you finally say what the paper actually is about, but don't relate it back to the problems you have been discussing with "the ancestral graph approach" (whatever that is). Update: I thank the authors for their comments, which have satisfied me.
NIPS
Title An Algorithm to Learn Polytree Networks with Hidden Nodes Abstract Ancestral graphs are a prevalent mathematical tool to take into account latent (hidden) variables in a probabilistic graphical model. In ancestral graph representations, the nodes are only the observed (manifest) variables and the notion of m-separation fully characterizes the conditional independence relations among such variables, bypassing the need to explicitly consider latent variables. However, ancestral graph models do not necessarily represent the actual causal structure of the model, and do not contain information about, for example, the precise number and location of the hidden variables. Being able to detect the presence of latent variables while also inferring their precise location within the actual causal structure model is a more challenging task that provides more information about the actual causal relationships among all the model variables, including the latent ones. In this article, we develop an algorithm to exactly recover graphical models of random variables with underlying polytree structures when the latent nodes satisfy specific degree conditions. Therefore, this article proposes an approach for the full identification of hidden variables in a polytree. We also show that the algorithm is complete in the sense that when such degree conditions are not met, there exists another polytree with fewer number of latent nodes satisfying the degree conditions and entailing the same independence relations among the observed variables, making it indistinguishable from the actual polytree. 1 Introduction The presence of unmeasured variables is a fundamental challenge in discovery of causal relationships [1, 2, 3]. When the causal diagram is a Directed Acyclic Graph (DAG) with unmeasured variables, a common approach is to use ancestral graphs to describe the independence relations among the measured variables [2]. The main advantage of ancestral graphs is that they involve only the measured variables and successfully encode all their conditional independence relations via m-separation. Furthermore, complete algorithms have been devised to obtain ancestral graphs from observational data, e.g., the work in [3]. However, recovering the actual structure of the original DAG is something that ancestral graphs somehow circumvent. For example, it might be known that the actual causal diagram has a polytree structure including the hidden nodes, but the ancestral graph associated with the measured variables might not even be a polytree [4]. Instead, the recovery of causal diagrams including the location of their hidden variables is a very challenging task and algorithmic solutions are available only for specific scenarios [5, 6, 7, 8]. For example, in the case of specific distributions (i.e., Gaussian and Binomial) when the causal diagram is known to be a rooted tree, the problem has been solved by exploiting the additivity of a metric along the paths of the tree [6, 7, 8, 9]. In the case of generic distributions, though, additive metrics might be too difficult to define or cannot be defined in general. Furthermore, rooted trees can be considered a rather limiting class of networks 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada. since they represent probability distributions which can only be factorized according to second order conditional distributions [10]. This article makes a novel contribution towards the recovery of more general causal diagrams. 
Indeed, it provides an algorithm to learn causal diagrams making no assumptions on the underlying probability distribution, and considering polytree structures which can represent factorizations involving conditional distributions of arbitrarily high order. Furthermore, it is shown that a causal diagram with a polytree structure can be exactly recovered if and only if each hidden node satisfies the following conditions: (i) the node has at least two children; (ii) if the node has exactly one parent, such a parent is not hidden; (iii) the node has at least degree 3, or each of its two children has at least another parent. The provided algorithm recovers every polytree structure with hidden nodes satisfying these conditions, and, remarkably, makes use only of third order statistics. If the degree conditions are not satisfied, then it is shown that there exists another polytree with fewer number of hidden random variables which entails the same independence relations among the observed variables. Indeed, in this case, when no additional information/observations are provided, no test can be constructed to determine the true structure. Another main advantage of this proposed approach lies in the fact that it follows a form of Occam’s razor principle since in the case where the degree conditions on the hidden nodes are not met, then a polytree with minimal number of hidden nodes is selected. We find this property quite relevant in application scenarios since Occam’s razor is arguably one of the cardinal principles in all sciences. 2 Preliminaries, Assumptions and Problem Definition In order to formulate our problem, we first introduce a generalization of the notions of directed and undirected graphs (see for example [11, 12]) which also considers a partition of the set of nodes into visible and hidden nodes. Definition 1 (Latent partially directed graph). A latent partially directed graph Ḡ` is a 4-ple (V, L, E, ~E) where • the disjoint sets V and L are named the set of visible nodes and the set of hidden nodes, • the set E is the set of undirected edges containing unordered pairs of (V ∪ L) × (V ∪ L), • the set ~E is the set of directed edges containing ordered pairs of (V ∪ L) × (V ∪ L). We denote the unordered pair of two elements yi, y j ∈ V ∪ L as yi − y j, and the ordered pair of yi, y j (when yi precedes y j) as yi → y j. In a latent partially directed graph the sets E and ~E do not share any edges. Namely, yi − y j ∈ E implies that both yi → y j and y j → yi are not in ~E. A latent partially directed graph is a fully undirected graph when ~E = ∅, and we simplify the notation by writing G` = (V, L, E). Similarly, when E = ∅, we have a fully directed graph, and we denote it by ~G` = (V, L, ~E). Furthermore, if we drop the distinction between visible and hidden nodes and consider V ∪ L as the set of nodes, we recover the standard notions of undirected and directed graphs. Thus, latent partially directed graphs inherit, in a natural way, all notions associated with standard graphs (e.g., path, degree, neighbor, etc., see for example [11]). In the scope of this article, we denote degree, outdegree, indegree, children, parents, descendants and ancestors of a node y in graph ~G using deg~G (y), deg + ~G (y), deg−~G (y), ch~G (y), pa~G (y), de~G (y) and an~G (y), respectively (see [11, 12] for precise definitions). Furthermore, the notion of restriction of a graph to a subset of nodes follows immediately. Definition 2 (Restriction of a latent partially directed graph). 
The restriction of a latent partially directed graph Ḡ` = (V, L, E, ~E) with respect to a set of nodes A ⊆ V ∪ L is the latent partially directed graph obtained by considering only the nodes in A and the edges linking pairs of nodes which are both in A. Moreover, a latent partially directed graph is called a latent partially directed tree when there exists exactly one path connecting any pair of nodes. Definition 3 (Latent partially directed tree). A latent partially directed tree ~P` is a latent partially directed graph Ḡ` = (V, L, E, ~E) where every pair of nodes yi, y j ∈ V ∪ L is connected by exactly one path. Trivially, latent partially directed trees generalize the notions of undirected trees and polytrees (directed trees) [13]. In a latent partially directed tree, we define a hidden cluster as a group of hidden nodes that are connected to each other via a path constituted exclusively of hidden nodes. Definition 4 (Hidden cluster). A hidden cluster in a latent partially directed tree ~P` = (V, L, E, ~E) is a set C ⊆ L such that for each distinct pair of nodes yi, y j ∈ C the unique path connecting them contains only nodes in C and no node in C is linked to a node which is in L \C. Observe that each node in a hidden cluster has neighbors which are either visible or hidden nodes of the same cluster. Figure 1 (a) depicts a latent directed tree (or a latent polytree) and its hidden clusters C1 and C2 highlighted by the dotted lines. 14 Furthermore, we introduce the set of (visible) neighbors of a hidden cluster, its closure and its degree. Definition 5 (Neighbors, closure, and degree of a hidden cluster). In a latent partially directed tree, the set of all visible nodes linked to any of the nodes of a hidden cluster C is the set of neighbors of C and is denoted by N(C). We define the degree of the hidden cluster as |N(C)|, namely, the number of neighbors of the cluster. We refer to the restriction of a latent polytree to a hidden cluster and its neighbors as the closure of the hidden cluster. Observe that the neighbors of C1 are shaded with orange color in Figure 1 (a). We also remind the notion of a root node and define the notion of a root of a hidden cluster. Definition 6 (Root of a latent polytree, and root of a hidden cluster in a latent polytree). In a latent polytree ~P` = (V, L, ~E), a root is a node yr ∈ V ∪ L with indegree equal to zero. Also, we define any root of the restriction of the polytree to one of its hidden clusters as the root of the hidden cluster. For example, in Figure 1 (a), node y1 is a root of the latent polytree and node yh3 is a root of the hidden cluster C1. In this article, we make extensive use of the restriction of a polytree to the descendants of one of its roots. We define such a restriction as the rooted subtree of the polytree associated with that root. Additionally, given a latent partially directed tree, we define its collapsed representation by replacing each hidden cluster with a single hidden node. The formal definition is as follows and Figure 1 (b) depicts the collapsed representation of the latent polytree of Figure 1 (a). Definition 7 (Collapsed representation). We define the collapsed representation of ~P` = (V, L, E, ~E) as the latent partially directed tree ~Pc = (V, Lc, Ec, ~Ec) where nc is the number of hidden clusters C1, ...,Cnc , and Lc := C1 ∪ ... 
∪Cnc , and Ec := {yi − y j ∈ E | yi, y j ∈ V} ∪ {yi −Ck | ∃y j ∈ Ck, yi − y j ∈ E} ∪ {Ck − y j | ∃yi ∈ Ck, yi − y j ∈ E} ~Ec := {yi → y j ∈ ~E | yi, y j ∈ V} ∪ {yi → Ck | ∃y j ∈ Ck, yi → y j ∈ ~E} ∪ {Ck → y j | ∃yi ∈ Ck, yi → y j ∈ ~E}. In this article, we show the cases where graphical models with polytree structures can be recovered from the independence relations involving only visible nodes. Specifically, we assume that a polytree is a perfect map (see [14, 12]) for a probabilistic model defined over the variables V ∪ L where V and L are disjoint sets. We find conditions under which it is possible to recover information about the perfect map of the probabilistic model considering only independence relations of the form I(yi, ∅, y j) (read yi and y j are independent) and I(yi, yk, y j) (read yi and y j are conditionally independent given yk) for all nodes yi, y j, yk ∈ V . One of the fundamental requirements of solving this problem is that all hidden nodes need to satisfy certain degree conditions summarized in the following definition. Definition 8 (Minimal latent polytree). A latent polytree ~P` = (V, L, ~E) is minimal if every hidden node yh ∈ L satisfies one of the following conditions: • deg+~P` (yh) ≥ 2 and deg~P` (yh) ≥ 3 and if |pa~P` (yh) | = 1, then pa~P` (yh) ⊆ V; • deg+~P` (yh) = 2 and deg − ~P` (yh) = 0 and deg−~P` ( yc1 ) , deg−~P` ( yc2 ) ≥ 2 where ch~P` (yh) = {yc1 , yc2 }. Note that the nodes yh2 , yh4 , yh5 , yh7 in Figure 1 (a) do not satisfy the minimality conditions and therefore the hidden polytree is not minimal. Instead, Figure 1 (c) shows a minimal latent polytree. The algorithm we propose to recover the structure of a latent polytree can be decomposed in several tasks and the hidden nodes which are roots with outdegree equal to 2 and at least one visible child require to be dealt with in a special way in the last task of the algorithm. Therefore, we define the following two types of hidden nodes to make this distinction. Definition 9 (Type-I and type-II hidden nodes). In a minimal latent polytree, we classify a hidden node yh as type-II when deg~G (yh) = 2 with at least one visible child. All other hidden nodes are classified as type-I. In the minimal latent polytree of Figure 2 (a), the hidden nodes yh2 and yh3 are type-II hidden nodes, while all the other hidden nodes are type-I. We define the quasi-skeleton of a minimal latent polytree to deal with type-II hidden nodes separately. Definition 10 (Quasi-skeleton of a latent polytree). In a minimal latent polytree ~P` = (V, L, ~E), the quasi-skeleton of ~P` is the undirected graph obtained by removing the orientation of all edges in ~P`, and removing all the type-II hidden nodes and then linking its two children together. In Figure 2 (b), we have the quasi-skeleton of the polytree of Figure 2 (a). Observe that we can easily define the collapsed representation of a quasi-skeleton of a latent polytree by finding the quasi-skeleton first and then finding its collapsed representation as in Figure 2 (c). As it is well known in the theory of graphical models, in the general case, from a set of conditional independence statements (formally, a semi-graphoid) faithful to a Directed Acyclic Graph (DAG), it is not possible to recover the full DAG [15, 1]. What can be recovered for sure is the pattern of the DAG, namely the skeleton and the v-structures (i.e., yi → yk ← y j) of the DAG [15, 1]. 
The algorithm we propose to recover the structure of a latent polytree can be decomposed into several tasks, and the hidden nodes which are roots with outdegree equal to 2 and at least one visible child need to be dealt with in a special way in the last task of the algorithm. Therefore, we define the following two types of hidden nodes to make this distinction.

Definition 9 (Type-I and type-II hidden nodes). In a minimal latent polytree, we classify a hidden node $y_h$ as type-II when $\deg(y_h) = 2$ with at least one visible child. All other hidden nodes are classified as type-I.

In the minimal latent polytree of Figure 2 (a), the hidden nodes $y_{h_2}$ and $y_{h_3}$ are type-II hidden nodes, while all the other hidden nodes are type-I. We define the quasi-skeleton of a minimal latent polytree to deal with type-II hidden nodes separately.

Definition 10 (Quasi-skeleton of a latent polytree). In a minimal latent polytree $\vec{P}_\ell = (V, L, \vec{E})$, the quasi-skeleton of $\vec{P}_\ell$ is the undirected graph obtained by removing the orientation of all edges in $\vec{P}_\ell$, and removing each type-II hidden node and then linking its two children together.

In Figure 2 (b), we have the quasi-skeleton of the polytree of Figure 2 (a). Observe that we can easily define the collapsed representation of a quasi-skeleton of a latent polytree by finding the quasi-skeleton first and then finding its collapsed representation, as in Figure 2 (c).

As is well known in the theory of graphical models, in the general case it is not possible to recover the full Directed Acyclic Graph (DAG) from a set of conditional independence statements (formally, a semi-graphoid) faithful to it [15, 1]. What can be recovered for sure is the pattern of the DAG, namely its skeleton and its v-structures (i.e., $y_i \to y_k \leftarrow y_j$) [15, 1]. In this article, we show that, similarly, in the case of a minimal latent polytree, we are able to recover the pattern of the polytree from the independence statements involving only the visible variables.

Definition 11 (Pattern of a polytree). Let $\vec{P} = (N, \vec{E})$ be a polytree. The pattern of $\vec{P}$ is a partially directed graph in which all the v-structures (i.e., $y_i \to y_k \leftarrow y_j$) are oriented, and as many of the remaining undirected edges as possible are oriented, namely those for which the alternative orientation would result in an additional v-structure.

Now we have all the necessary tools to formulate the problem.

Problem Formulation. Assume a semi-graphoid defined over a set of variables $V \cup L$. Let the latent polytree $\vec{P}_\ell = (V, L, \vec{E})$ be faithful to the semi-graphoid and assume that the nodes in $L$ satisfy the minimality conditions. Recover the pattern of $\vec{P}_\ell$ from conditional independence relations involving only nodes in $V$.

Remark 12. The proposed solution makes use only of the conditional independence relations of the form $I(y_i, \emptyset, y_j)$ and $I(y_i, y_k, y_j)$ for all $y_i, y_j, y_k \in V$.

3 An Algorithm to Reconstruct Minimal Hidden Polytrees

Our algorithm for learning the pattern of a minimal latent polytree is made of the following 5 tasks:
1. Using the independence statements involving the visible nodes, determine the number of rooted subtrees in the latent polytree and their respective sets of visible nodes;
2. Given all the visible nodes belonging to each rooted subtree, determine the collapsed quasi-skeleton of each rooted subtree;
3. Merge the overlapping hidden clusters in the collapsed quasi-skeleton of each rooted subtree to obtain the collapsed quasi-skeleton of the latent polytree;
4. Determine the quasi-skeleton of the latent polytree from the collapsed quasi-skeleton of the latent polytree (recover type-I hidden nodes);
5. Obtain the pattern of the latent polytree from the recovered quasi-skeleton of the latent polytree (recover type-II hidden nodes and edge orientations).

Figure 3 shows the stage of the recovery of the polytree structure at the end of each task. The following subsections provide more details about each task, but the most technical results are in the Supplementary Material. We stress that the first two tasks mostly leverage previous work about rooted trees, and the main novelty of this article lies in Tasks 3, 4 and 5.

3.1 Task 1: Determine the visible nodes of each rooted subtree

This first task can be performed by the Pairwise-Finite Distance Algorithm (PFDA), presented in [16] and reported in the Supplementary Material as Algorithm 4. As shown in [16], PFDA takes as input the set of visible nodes of a latent polytree and outputs sets of visible nodes with the property that, when the polytree is minimal, each set corresponds to the visible descendants of a root of the latent polytree. In the following theorem, we show that the output of PFDA applied to the independence statements is the same as described above. See the Supplementary Material for the proof of this theorem.

Theorem 13. Consider a latent polytree $\vec{P}_\ell = (V, L, \vec{E})$ faithful to a probabilistic model. Assume that the hidden nodes in $L$ satisfy the minimality conditions. Then PFDA, applied to the independence statements of the probabilistic model with the form $I(y_i, \emptyset, y_j)$ for all $y_i, y_j \in V$, outputs a collection of sets such that each of them is given by all the visible descendants of a root of $\vec{P}_\ell$.
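Theorem 13 and all the later tasks consume nothing but answers to the queries of Remark 12. For synthetic tests, such an oracle can be generated from a ground-truth polytree by d-separation, which faithfulness makes equivalent to the independence statements. The sketch below is ours, not part of the paper; it is the classical reachability form of the Bayes-ball procedure, with `yk = None` encoding the marginal query $I(y_i, \emptyset, y_j)$.

```python
from collections import deque

def d_separated(parents, children, x, z, y):
    """True iff x and y are d-separated given the set z in the DAG
    defined by `parents`/`children` (dicts of sets)."""
    # Ancestors of z (inclusive), needed for the collider rule.
    anc, stack = set(), list(z)
    while stack:
        n = stack.pop()
        if n not in anc:
            anc.add(n)
            stack.extend(parents[n])
    # States are (node, direction): 'up' = reached from a child,
    # 'down' = reached from a parent.
    visited, queue = set(), deque([(x, "up")])
    while queue:
        n, d = queue.popleft()
        if (n, d) in visited:
            continue
        visited.add((n, d))
        if n == y and n not in z:
            return False                      # an active trail reaches y
        if d == "up" and n not in z:
            queue.extend((p, "up") for p in parents[n])
            queue.extend((c, "down") for c in children[n])
        elif d == "down":
            if n not in z:                    # chain/fork continues
                queue.extend((c, "down") for c in children[n])
            if n in anc:                      # collider is activated
                queue.extend((p, "up") for p in parents[n])
    return True

# Ground-truth polytree y1 -> h <- y3, h -> y2 (hypothetical labels):
parents = {"y1": set(), "y3": set(), "h": {"y1", "y3"}, "y2": {"h"}}
children = {"y1": {"h"}, "y3": {"h"}, "h": {"y2"}, "y2": set()}
indep = lambda yi, yk, yj: d_separated(parents, children,
                                       yi, {yk} if yk else set(), yj)
assert indep("y1", None, "y3")        # I(y1, ∅, y3): the path collides at h
assert not indep("y1", "y2", "y3")    # conditioning on y2 activates the collider
```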
3.2 Task 2: Determine the collapsed quasi-skeleton of each rooted subtree

The second task is performed by the Reconstruction Algorithm for Latent Rooted Trees in [17]. We report it as Algorithm 5 in the Supplementary Material for completeness. The input of this algorithm is the set $V_r$ of the visible nodes belonging to a rooted subtree $T_r$ together with the independence relations of the form $I(y_i, y_k, y_j)$ or $\neg I(y_i, y_k, y_j)$ for distinct $y_i, y_j, y_k \in V_r$. Its output is the collapsed quasi-skeleton of $T_r$. Thus, we can call this algorithm on all of the sets of visible nodes $V_1, \ldots, V_{n_r}$, where $n_r$ is the number of roots, obtained from Task 1, and find the collapsed quasi-skeletons of all the rooted subtrees of the latent polytree. This result is formalized in the following theorem. See the Supplementary Material for the proof of this theorem.

Theorem 14. Let $\vec{P}_\ell = (V, L, \vec{E})$ be a minimal latent polytree. Consider a root $y_r$ of $\vec{P}_\ell$ and let $V_r = V \cap \mathrm{de}_{\vec{P}_\ell}(y_r)$. The output of the Reconstruction Algorithm for Latent Rooted Trees applied to $V_r$ is the collapsed quasi-skeleton of the rooted subtree with root node $y_r$.

3.3 Task 3: Merge the overlapping hidden clusters of the collapsed rooted trees

By applying the Reconstruction Algorithm for Latent Rooted Trees on each set of visible nodes in the same rooted tree, we obtain, as an output, the collapsed quasi-skeletons of all rooted subtrees in the original hidden polytree. In the general case, some hidden clusters in the collapsed quasi-skeletons of the rooted subtrees might overlap, namely, they might share some hidden nodes in the original hidden polytree. The following theorem provides a test on the sets of visible nodes of the rooted subtrees in a minimal latent polytree to determine whether two hidden clusters in two distinct collapsed quasi-skeletons of two rooted subtrees belong to the same cluster in the collapsed quasi-skeleton of the polytree. See the Supplementary Material for the proof of this theorem.

Theorem 15. Consider a minimal latent polytree $\vec{P}_\ell$. Let $C_1$ and $C_2$ be two distinct hidden clusters in the collapsed quasi-skeletons of two rooted subtrees of $\vec{P}_\ell$. If the set of neighbors of $C_1$ and the set of neighbors of $C_2$ share at least a pair of visible nodes, i.e., $|N(C_1) \cap N(C_2)| \ge 2$, then the nodes in $C_1$ and $C_2$ belong to the same hidden cluster in the collapsed quasi-skeleton of $\vec{P}_\ell$.

This theorem is the enabling result for the Hidden Cluster Merging Algorithm (HCMA), presented in Algorithm 1, which merges all the collapsed quasi-skeletons associated with the individual rooted subtrees, obtained from Task 2, into the collapsed quasi-skeleton of the polytree. This algorithm starts with the collapsed quasi-skeletons of the rooted subtrees, finds pairs of clusters that overlap by testing whether they share at least one pair of visible neighbors (see Theorem 15), and then merges the overlapping pairs. This procedure is repeated until no more clusters are merged.

Algorithm 1 Hidden Cluster Merging Algorithm
Input: the collapsed quasi-skeletons of the rooted subtrees $T_i = (V_i, L_i, E_i)$ for $i = 1, \ldots, n_r$
Output: the collapsed quasi-skeleton $P$ of the latent polytree
1: Initialize the set of clusters $\mathcal{P}$ with the hidden clusters of all $T_i$, i.e., $\mathcal{P} := \{\{C_1\}, \{C_2\}, \ldots, \{C_k\}\}$
2: while there are two elements $C_i, C_j \in \mathcal{P}$ such that $|N(C_i) \cap N(C_j)| \ge 2$ do
3:     remove $C_i, C_j$ from $\mathcal{P}$ and add $C_i \cup C_j$ to $\mathcal{P}$
4:     define $N(C_i \cup C_j) := N(C_i) \cup N(C_j)$
5: end while
6: Define the polytree $P = (\cup_i V_i, \mathcal{P}, E)$ where
$$E := \{\{y_a, y_b\} \mid \exists\, i : y_a, y_b \in V_i,\ y_a - y_b \in E_i\} \cup \{\{y_a, C_b\} \mid \exists\, i, h : y_a \in V_i,\ y_h \in L_i,\ L_i \subseteq C_b,\ C_b \in \mathcal{P},\ y_a - y_h \in E_i\}$$
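Steps 1-5 of Algorithm 1 translate almost verbatim into code. In this sketch (ours), a cluster is identified by the frozenset of the original cluster ids it has absorbed and carries its visible-neighbor set $N(C)$; the merge test is exactly the condition of Theorem 15.

```python
def merge_hidden_clusters(clusters):
    """Merging loop of HCMA (Algorithm 1, Steps 1-5): repeatedly fuse
    two clusters whose visible-neighbor sets share at least two nodes.

    `clusters` maps a cluster id to its set of visible neighbors N(C).
    Returns a dict mapping each merged cluster (a frozenset of original
    ids) to the union of the corresponding neighbor sets."""
    P = {frozenset([cid]): set(nbrs) for cid, nbrs in clusters.items()}
    changed = True
    while changed:
        changed = False
        ids = list(P)
        for i in range(len(ids)):
            for j in range(i + 1, len(ids)):
                ci, cj = ids[i], ids[j]
                if len(P[ci] & P[cj]) >= 2:      # |N(Ci) ∩ N(Cj)| >= 2
                    P[ci | cj] = P.pop(ci) | P.pop(cj)
                    changed = True
                    break
            if changed:
                break
    return P

# Two rooted subtrees whose clusters share the visible pair {y5, y6}:
P = merge_hidden_clusters({"C1": {"y1", "y5", "y6"}, "C2": {"y5", "y6", "y9"}})
# P == {frozenset({"C1", "C2"}): {"y1", "y5", "y6", "y9"}}
```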
The following theorem guarantees that, for a minimal latent polytree, the output of HCMA is the collapsed quasi-skeleton of the polytree. See the Supplementary Material for the proof of this theorem.

Theorem 16. Let $\vec{P}_\ell = (V, L, \vec{E})$ be a minimal latent polytree and let $T_i = (V_i, L_i, E_i)$ for $i = 1, \ldots, n_r$ be the collapsed quasi-skeletons of the rooted subtrees of $\vec{P}_\ell$. Then HCMA outputs the collapsed quasi-skeleton of $\vec{P}_\ell$.

3.4 Task 4: Determine the quasi-skeleton of the latent polytree from the collapsed quasi-skeleton of the latent polytree (recover type-I hidden nodes)

After performing the HCMA, the output is the collapsed quasi-skeleton of the latent polytree; thus, the structure of the hidden nodes within each hidden cluster is not known yet. Note that the restriction of the original polytree to the closure of a hidden cluster is a smaller polytree. The goal of this task is to recover the structure of the hidden clusters by focusing on each individual closure (i.e., recover type-I hidden nodes and their connectivities). Given the closure of a hidden cluster, the basic strategy is to detect one root of the hidden cluster along with the visible nodes (if any) linked to this root. Then, we relabel such a root as a visible node, add edges between this node and its visible neighbors, and subsequently apply the same strategy recursively to the descendants of the detected root. Since we focus on the closure of a specific hidden cluster, say $C$, we define the sets $\tilde{V}_r = V_r \cap N(C)$ for $r = 1, \ldots, n_r$, where $n_r$ is the number of rooted subtrees in the latent polytree and the $V_r$ are the sets of visible nodes in each rooted subtree (obtained from Task 1). A fundamental result for the detection of a root of a hidden cluster is the following theorem. See the Supplementary Material for the proof of this theorem.

Theorem 17. Let $\vec{P}_\ell$ be a minimal latent polytree and let $\vec{T}_r = (V_r, L_r, \vec{E}_r)$ with $r = 1, \ldots, n_r$ be all the rooted subtrees of $\vec{P}_\ell$. Let $C$ be a hidden cluster in the collapsed quasi-skeleton of $\vec{P}_\ell$. Define $\tilde{V}_r := V_r \cap N(C)$ for $r = 1, \ldots, n_r$, where $n_r$ is the number of roots in $\vec{P}_\ell$. Then $T_r$ contains a hidden root of $C$ if and only if $\tilde{V}_r \neq \emptyset$ and for all $\tilde{V}_{r'}$ with $r' \neq r$ we have $|\tilde{V}_r \setminus \tilde{V}_{r'}| > 1$ or $|\tilde{V}_{r'} \setminus \tilde{V}_r| \le 1$.

To make the application of this theorem clearer, consider the latent polytree introduced in Figure 3 (True). After applying the first three tasks, we obtain the collapsed quasi-skeleton of the latent polytree as depicted in Figure 3 (Task 3). Observe that the rooted subtrees $\vec{T}_1$ (with root $y_1$) and $\vec{T}_2$ (with root $y_2$) satisfy the conditions of Theorem 17, indicating that they contain a root of the hidden cluster. The following lemma allows one to find the visible nodes linked to a hidden root in the closure of a hidden cluster. See the Supplementary Material for the proof of this lemma.

Lemma 18. Let $\vec{P}_\ell$ be a minimal latent polytree. Consider a hidden root $y_h$ of a hidden cluster $C$ in the collapsed quasi-skeleton of $\vec{P}_\ell$, where $y_h$ belongs to the rooted subtree $T_r = (V_r, L_r, \vec{E}_r)$. Define $\tilde{V}_{r'} := V_{r'} \cap N(C)$ for $r' = 1, \ldots, n_r$, where $n_r$ is the number of roots in $\vec{P}_\ell$. The visible nodes linked to $y_h$ are given by the set $W \setminus \overline{W}$, where
$$I := \{r\} \cup \{r' \mid |\tilde{V}_r \setminus \tilde{V}_{r'}| = |\tilde{V}_{r'} \setminus \tilde{V}_r| = 1\}, \qquad W := \bigcup_{i \in I} \tilde{V}_i, \qquad \overline{W} := \bigcup_{i \notin I} \tilde{V}_i.$$
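Theorem 17 and Lemma 18 combine into one small routine. In the sketch below (ours), `V_tilde[r]` holds the nonempty set $\tilde{V}_r = V_r \cap N(C)$ for each rooted subtree that meets the cluster; the toy input is hypothetical but chosen so that, as in the Figure 3 example discussed next, the returned visible nodes are $y_1$, $y_2$ and $y_{10}$.

```python
def hidden_root_and_links(V_tilde):
    """Theorem 17 (root detection) and Lemma 18 (linked visible nodes)
    in one pass. Returns (r, linked): subtree r contains a hidden root
    of the cluster, and `linked` are the visible nodes attached to it."""
    for r, Vr in V_tilde.items():
        # Theorem 17: subtree r hosts a hidden root of C iff, for every
        # other subtree r', |Ṽr \ Ṽr'| > 1 or |Ṽr' \ Ṽr| <= 1.
        if all(len(Vr - Vp) > 1 or len(Vp - Vr) <= 1
               for rp, Vp in V_tilde.items() if rp != r):
            # Lemma 18: build the index set I and the unions W, W̄.
            I = {r} | {rp for rp, Vp in V_tilde.items()
                       if len(Vr - Vp) == 1 and len(Vp - Vr) == 1}
            W = set().union(*(V_tilde[i] for i in I))
            rest = [V_tilde[i] for i in V_tilde if i not in I]
            W_bar = set().union(*rest) if rest else set()
            return r, W - W_bar
    return None

V_tilde = {1: {"y1", "y2", "y10", "y12"},
           2: {"y2", "y10", "y12", "y13"},
           3: {"y12", "y13"}}
r, linked = hidden_root_and_links(V_tilde)
# r == 1 and linked == {"y1", "y2", "y10"}
```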
We follow the example of Figure 3 to show the steps of Task 4 in more detail. Without loss of generality, choose $T_r = T_1$. Consider the closure of $C_{A'}$ obtained at the end of Task 3 and apply Lemma 18 to obtain $I = \{1, 2\}$, $W = \{y_1, y_2, y_{10}, y_{12}, y_{13}, y_{14}, y_{15}, y_{16}, y_{17}\}$, $\overline{W} = \{y_5, y_6, y_9, y_{11}, y_{12}, y_{13}, y_{14}, y_{15}, y_{16}, y_{17}\}$, and thus $W \setminus \overline{W} = \{y_1, y_2, y_{10}\}$. Therefore, the visible nodes linked to the hidden root in $T_1$ are $y_1$, $y_2$ and $y_{10}$.

Now we introduce the Hidden Cluster Learning Algorithm (HCLA), presented in Algorithm 2, to learn the structure of a hidden cluster. Again, consider the closure of the hidden cluster $C_{A'}$ as depicted in Figure 4 (Task 4a), which we obtained at the end of Task 3. Then, apply the Hidden Node Detection procedure to $C_{A'}$ and observe that the output at the end of Step 23 of Algorithm 2 is shown in Figure 4 (Task 4b). The output of the merging in Steps 24-27 is depicted in Figure 4 (Task 4c), and the output of the merging in Step 28 is depicted in Figure 4 (Task 4d). Now, we can apply the same procedure recursively to the remaining hidden clusters to obtain the final output of Task 4, the quasi-skeleton of the polytree, as depicted in Figure 3 (Task 4). Here, we show that the output of HCLA is the quasi-skeleton of the latent polytree. See the Supplementary Material for the proof of this theorem.

Theorem 19. Let $\vec{P}_\ell = (V, L, \vec{E})$ be a minimal latent polytree. When HCLA is applied to all hidden clusters of the collapsed quasi-skeleton of $\vec{P}_\ell$, the output $P = (V, E)$ is the quasi-skeleton of $\vec{P}_\ell$. Furthermore, HCLA also outputs, for each pair $y_i, y_j \in V$, the relation $I(y_i, \emptyset, y_j)$ if and only if the path connecting $y_i$ and $y_j$ in $\vec{P}_\ell$ contains an inverted fork.
Algorithm 2 Hidden Cluster Learning Algorithm
Input: the collapsed quasi-skeleton of a minimal polytree $\vec{P}_\ell$, the collapsed quasi-skeletons of the rooted subtrees $T_i = (V_i, L_i, E_i)$ for $i = 1, \ldots, n_r$, and the set of hidden clusters $\mathcal{P} = \{C_1, \ldots, C_{n_C}\}$
Output: $P$ and the independence relations of the form $I(y_a, \emptyset, y_b)$ or $\neg I(y_a, \emptyset, y_b)$ for all nodes $y_a, y_b \in \bigcup_i V_i$
1: while $\mathcal{P} \neq \emptyset$ do
2:     call Hidden Node Detection Procedure($C_1$), where $C_1$ is the first element of $\mathcal{P}$
3: end while
4: procedure Hidden Node Detection($C$)
5:     compute $\tilde{V}_i = V_i \cap N(C)$
6:     find $\tilde{V}_r$ which satisfies $|\tilde{V}_r \setminus \tilde{V}_{r'}| > 1$ or $|\tilde{V}_{r'} \setminus \tilde{V}_r| \le 1$ for all $r' \neq r$ (as in Theorem 17)
7:     initialize $W := \tilde{V}_r$, $\overline{W} := \emptyset$, and $I := \{r\}$
8:     for all $i = 1, \ldots, n_r$ with $i \neq r$ do
9:         if $|\tilde{V}_r \setminus \tilde{V}_i| = 1$ and $|\tilde{V}_i \setminus \tilde{V}_r| = 1$ (as in Lemma 18) then
10:            $W := W \cup \tilde{V}_i$ and $I := I \cup \{i\}$
11:        else
12:            $\overline{W} := \overline{W} \cup \tilde{V}_i$
13:        end if
14:    end for
15:    a new hidden node $y_h$ is revealed
16:    add $y_h$ to all the rooted trees $T_i$ with $i \in I$, namely $V_i := V_i \cup \{y_h\}$
17:    add the relation $\neg I(y_h, \emptyset, y)$ for all $y \in V_i$ with $i \in I$, and add the relation $I(y_h, \emptyset, y)$ for all other nodes $y$
18:    link all nodes in $W \setminus \overline{W}$ to $y_h$ in all $T_i$ with $i \in I$, namely $E_i := E_i \cup \{\{y_h, y\} \mid y \in W \setminus \overline{W}\}$
19:    for all $i \in I$ do
20:        create $n_k = |W \cap \overline{W}|$ new clusters $C^{(i)}_1, \ldots, C^{(i)}_{n_k}$
21:        link $y_h$ to $C^{(i)}_1, \ldots, C^{(i)}_{n_k}$
22:        link each cluster $C^{(i)}_1, \ldots, C^{(i)}_{n_k}$ to a distinct element of $W \cap \overline{W}$
23:    end for
24:    while $\exists\, y_a, y_b \in N(C^{(i)}_j) \cup N(C^{(i)}_k)$ such that $y_a, y_b \in \tilde{V}_m$ with $m \notin I$ do
25:        merge the two hidden clusters $C^{(i)}_j$ and $C^{(i)}_k$
26:        update the structure of $T_i$ with the new hidden clusters
27:    end while
28:    let $P = (V, \mathcal{P}, E)$ be the output of HCMA applied to $T_i = (V_i, L_i, E_i)$, for $i = 1, \ldots, n_r$
29: end procedure

3.5 Task 5: Obtain the pattern of the latent polytree from the recovered quasi-skeleton of the latent polytree (recover type-II hidden nodes and edge orientations)

Once the quasi-skeleton of the latent polytree has been obtained, the only nodes missing from the full skeleton are the type-II hidden nodes of the original polytree. Interestingly, the detection of such hidden nodes can be performed concurrently with the recovery of the edge orientations. In particular, we apply Rebane and Pearl's algorithm [13] to orient the edges of the quasi-skeleton of the polytree. Then, the edges receiving double orientations imply the presence of a type-II hidden node between the two linked nodes. Thus, the Hidden Root Recovery Algorithm (HRRA), presented in Algorithm 3, is simply an implementation of Rebane and Pearl's algorithm (Steps 1-4), as depicted in Figure 4 (Task 5a), with the additional detection of type-II hidden nodes (Steps 5-10).

Algorithm 3 Hidden Root Recovery Algorithm
Input: $P = (V, E)$, the quasi-skeleton of a latent polytree, and the independence relations of the form $I(y_i, \emptyset, y_j)$ or $\neg I(y_i, \emptyset, y_j)$ for all nodes $y_i, y_j \in V$
Output: the partially directed polytree $\bar{P} = (V, E, \vec{E})$
1: while additional edges are oriented do
2:     if $y_i - y_k, y_j - y_k \in E$ and $I(y_i, \emptyset, y_j)$, then add $y_i \to y_k$ and $y_j \to y_k$ to $\vec{E}$
3:     if $y_i \to y_k \in \vec{E}$, $y_k - y_j \in E$ and $\neg I(y_i, \emptyset, y_j)$, then add $y_k \to y_j$ to $\vec{E}$
4: end while
5: remove the edges that are oriented in $\vec{E}$ from $E$
6: for all $y_i, y_j$ such that $y_i \to y_j, y_j \to y_i \in \vec{E}$ do
7:     a new type-II hidden node is detected, which is a parent of both $y_i$ and $y_j$
8:     remove $y_i \to y_j, y_j \to y_i$ from $\vec{E}$
9:     add a new node $y_h$ to $V$
10:    add $y_h \to y_i, y_h \to y_j$ to $\vec{E}$
11: end for

As a consequence, we have the final result stated in Theorem 20, proving that HRRA outputs the pattern of the latent polytree. See the Supplementary Material for the proof of this theorem.

Theorem 20. Let $\vec{P}_\ell$ be a minimal latent polytree. When the input is the quasi-skeleton of $\vec{P}_\ell$ together with the independence statements of the form $I(y_i, \emptyset, y_j)$ or $\neg I(y_i, \emptyset, y_j)$ for all pairs of nodes $y_i$ and $y_j$, the output of HRRA is the pattern of $\vec{P}_\ell$.

For a complete step-by-step example of this algorithm, see the Supplementary Material.
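Algorithm 3 is short enough to transcribe directly. In the Python sketch below (the data layout and the fresh labels `h0, h1, ...` are our own choices), double orientations are allowed to accumulate during the propagation loop; each doubly oriented edge is then replaced by a fresh type-II hidden parent, which is exactly the detection mechanism of Steps 5-10.

```python
def hrra(V, E, indep):
    """Hidden Root Recovery Algorithm (Algorithm 3), sketched in Python.

    V: node labels (strings); E: undirected edges as 2-tuples;
    indep(ya, yb) answers I(ya, ∅, yb). Returns the (possibly extended)
    node set, the edges left undirected, and the directed edge set."""
    V, E, Edir = set(V), {frozenset(e) for e in E}, set()

    def nbrs(y):
        return {next(iter(e - {y})) for e in E if y in e}

    changed = True
    while changed:
        changed = False
        # Step 2: orient v-structures yi -> yk <- yj when I(yi, ∅, yj).
        for yk in V:
            for yi in nbrs(yk):
                for yj in nbrs(yk):
                    if yi < yj and indep(yi, yj):
                        new = {(yi, yk), (yj, yk)} - Edir
                        if new:
                            Edir |= new
                            changed = True
        # Step 3: extend yi -> yk - yj into yk -> yj when ¬I(yi, ∅, yj).
        for yi, yk in list(Edir):
            for yj in nbrs(yk):
                if yj != yi and (yk, yj) not in Edir and not indep(yi, yj):
                    Edir.add((yk, yj))
                    changed = True

    # Step 5: oriented edges leave the undirected set.
    def oriented(e):
        a, b = tuple(e)
        return (a, b) in Edir or (b, a) in Edir
    E = {e for e in E if not oriented(e)}

    # Steps 6-10: each doubly oriented edge reveals a type-II hidden parent.
    doubly = {tuple(sorted((a, b))) for (a, b) in Edir if (b, a) in Edir}
    for k, (yi, yj) in enumerate(sorted(doubly)):
        yh = f"h{k}"                       # hypothetical fresh label
        Edir -= {(yi, yj), (yj, yi)}
        V.add(yh)
        Edir |= {(yh, yi), (yh, yj)}
    return V, E, Edir

# Quasi-skeleton of the true model p1 -> y1 <- h -> y2 <- p2, with the
# type-II root h removed; the only marginal dependencies are:
dep = {("p1", "y1"), ("p2", "y2"), ("y1", "y2")}
indep = lambda a, b: (a, b) not in dep and (b, a) not in dep
V, E, Edir = hrra({"p1", "p2", "y1", "y2"},
                  [("p1", "y1"), ("p2", "y2"), ("y1", "y2")], indep)
# Edir recovers p1 -> y1, p2 -> y2, plus a new hidden parent h0 of y1 and y2.
```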
4 Conclusions and Discussion

We have provided an algorithm to reconstruct the pattern of a latent polytree graphical model. The algorithm only requires the second and third order statistics of the observed variables, and no prior information about the number and location of the hidden nodes is assumed. An important property of the proposed approach is that the algorithm is sound under specific degree conditions on the hidden variables. If such degree conditions are not met, it is shown in the Supplementary Material that there exists another latent polytree with a smaller number of hidden nodes entailing the same independence relations. In this sense, the proposed algorithm always recovers a graphical model that is minimal in the number of hidden nodes, following a form of Occam's razor principle. Future work will study how this algorithm performs under limited amounts of data and how to deal with situations where the measurements are not exact.

Acknowledgments
This work has been partially supported by NSF (CNS CAREER #1553504).
1. What is the focus of the paper regarding latent variable models?
2. What are the key contributions and novel aspects of the proposed method?
3. What are the strengths of the paper in terms of its theoretical analysis?
4. Are there any concerns or limitations regarding the applicability of the proposed approach?
5. Does the paper provide sufficient motivation and examples for its significance?
Review
The paper considers learning a specific type of latent variable model from conditional independence relations, a latent polytree network. The paper shows that when the underlying network has a tree structure and certain degree conditions are met for the latent variables, the latent variable model can be exactly recovered. They show the degree conditions are complete in the sense that, when they are not met, there is a polytree network with fewer latent variables which satisfies the same conditional independence relations. An algorithm is provided for learning latent polytrees when these conditions are met. In general the paper is well written. The results are novel, sound, and theoretically interesting. The paper doesn't provide much motivation for the approach, so it's difficult to gauge the significance. It's not completely obvious what a realistic scenario would be where you can assume both a tree structure and the given degree conditions on the latent nodes, but don't already understand the exact latent structure. No motivating examples were provided and there are no empirical results.
1. How does the reviewer assess the quality of the paper's writing and the complexity of the proposed algorithm?
2. What are the limitations of the proposed approach, particularly in terms of its applicability in machine learning?
3. Despite these limitations, how does the reviewer view the contribution and potential applications of the work in specific domains?
Review
The text is very well-written; the algorithm is a bit involved (as it tests several cases), but a careful reader can follow all the steps provided. The class of learnable polytrees is somewhat limited; in particular, the assumption that the number of hidden variables is known limits the applicability of the method (from a machine learning perspective). Yet I find the work an important contribution to the field, and it can certainly find application in some specific domains.
NIPS
Title On Non-Linear operators for Geometric Deep Learning

Abstract This work studies operators mapping vector and scalar fields defined over a manifold $\mathcal{M}$, and which commute with its group of diffeomorphisms $\mathrm{Diff}(\mathcal{M})$. We prove that in the case of scalar fields $L^p_\omega(\mathcal{M}, \mathbb{R})$, those operators correspond to point-wise non-linearities, recovering and extending known results on $\mathbb{R}^d$. In the context of Neural Networks defined over $\mathcal{M}$, it indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields $L^p_\omega(\mathcal{M}, T\mathcal{M})$, we show that those operators are solely the scalar multiplication. It indicates that $\mathrm{Diff}(\mathcal{M})$ is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of $\mathcal{M}$.

1 Introduction

Given a physical domain $\mathcal{M}$ and measurements $f : \mathcal{M} \to \mathcal{Y}$ observed over it, one is often interested in processing intrinsic information from $f$, i.e. information consistent with the symmetries of the domain. Let $M$ denote an operator; it can be seen as a non-linear operator acting on measurements. In words, if two measurements $f$, $\tilde{f} = g.f$ are related by a symmetry $g$ of the domain, like a rigid motion on an observed molecular compound, we would like our processed data $M(f)$ and $M(\tilde{f})$ to be related by the same symmetry, so that $M(g.f) = g.M(f)$, or equivalently, $M$ commutes with the symmetry transformations of the domain. The study of operators that satisfy such symmetry constraints has played a long and central role in the history of physics and mathematics, motivated by the inherent symmetries of physical laws. More recently, such importance has also extended to the design of machine learning systems, where symmetries improve the sample complexity [25, 3]. For instance, Convolutional Neural Networks build translation symmetry, whereas Graph Neural Networks build permutation symmetry, amongst other examples coined under the 'Geometric Deep Learning' umbrella [5, 4]. Lie groups of transformations are of particular interest, because there exists a precise and systematic framework to build such intrinsic operators. Indeed, for a locally compact group $G$, it is possible to define a Haar measure which is invariant to the action of $G$ [2]; then a simple filtering along the orbit of $G$ allows one to define a class of linear operators that commute with the group action.
Examples of locally compact groups are given by specific Lie groups acting on $\mathbb{R}^d$, such as the translations or the rotations $O_d(\mathbb{R})$. Often these Lie groups $G$ only act on a manifold $\mathcal{M}$, and one tries to average along the orbit induced by $G$. Note that it is possible, beyond invariance, to linearize more complex groups of variability like diffeomorphisms $\mathrm{Diff}(\mathcal{M})$ [7]. While the description of such linear intrinsic structures is of central mathematical importance and forms the basis of Representation theory [30], in itself it is not sufficient to bear fruit in the context of Representation learning using Neural Networks [12]. Indeed, linear operators do not have the capacity to extract the rich information needed to solve challenging high-dimensional learning problems. It is therefore necessary to extend the systematic construction and classification of intrinsic operators to the non-linear case.

With that purpose in mind, our work aims at studying the class of (non-linear) operators $M$ which commute with the action of the group $\mathrm{Diff}(\mathcal{M})$, the diffeomorphisms over $\mathcal{M}$. This approach leads to a natural class of non-linear intrinsic operators. Indeed, any group $G$ of symmetries is, by definition, a subgroup of $\mathrm{Diff}(\mathcal{M})$, and thus commutes with such $M$ [24]. Consequently, obtaining a non-linear invariant to a symmetry group $G$ could be done by using a cascade of interlacing non-linear operators which commute with $\mathrm{Diff}(\mathcal{M})$ and linear operators which commute with $G$. A notable example of linear operators covariant to the Lie group of translations is given by convolutions along the orbit of the group. These can be constructed thanks to the canonical Haar measure [32]. However, such an approach fails for infinite-dimensional groups like our object of interest: contrary to Lie groups, $\mathrm{Diff}(\mathcal{M})$ is not locally compact, and it is thus not possible to define a Haar measure on this group.

Our first contribution is to demonstrate that the non-linear operators which act on vector fields (elements of $L^p_\omega(\mathcal{M}, T\mathcal{M})$) and which commute with the group of diffeomorphisms are actually just scalar multiplications. This implies that $\mathrm{Diff}(\mathcal{M})$ is too rich to obtain non-trivial operators. Our second contribution is to demonstrate that non-linear operators acting on signals in $L^p_\omega(\mathcal{M}, \mathbb{R})$ are point-wise non-linearities. This fills a gap in the results of [7], and a fortiori justifies the use of point-wise non-linearities in geometric Deep Learning [4].

Let us remark that the study of equivariant operators that take vector fields as input is motivated by the use of Neural Networks in physics, in particular for dynamical systems such as fluid dynamics [8]. For example, one subject of interest in hydrodynamics is how a vector field of velocities evolves; the time evolution of such a field is described by a partial differential equation (PDE), the Navier-Stokes equations, in which Neural Networks have found recent applications, as is more generally the case for other PDEs [31].

Our paper is structured as follows: Sec. 2 introduces the necessary formalism that we use throughout this paper; in particular, we formally define the action of diffeomorphisms. Then, we state and discuss our theorems in Sec. 3.1 and sketch their proofs in Sec. 3.2. Rigorous proofs of each statement can be found in the Appendix.

2 Problem Setup

2.1 Related work and motivation

In this section, we discuss the notion of intrinsic operators, invariant and covariant non-linear operators, and linear representations over standard symmetry groups.
Then, we formally state our objective.

Intrinsic Operators As discussed above, in this work we are interested in intrinsic operators M : Lp(M, E) → Lp(M, E), where M is a Riemannian manifold and E = R or E = TM, capturing respectively the setting of scalar signals and vector fields over M. Lp(M,R) is the space of scalar functions f : M → R whose p-th power is integrable; similarly, Lp(M, TM) is the space of sections f : M → TM of the tangent bundle of M (denoted TM) whose norm ∥f∥ : M → R is in Lp(M,R). Here the notion of ‘intrinsic’ means that M is consistent with an equivalence class induced by a symmetry group G in Lp(M, E): if f, f̃ ∈ Lp(M, E) are related by a transformation g ∈ G (in which case we write f = g.f̃), then M(f) = g.M(f̃). Naturally, a stronger equivalence class imposes a stronger requirement on M, and consequently restrains the complexity of M. We now describe the plausible techniques used to design such operators M.

GM-Convolutions The notion of GM-convolutions [34] is an example of linear covariant operators which commute with the reparametrization of a manifold. In practice, this implies that the weights of a GM-convolution are shared and the action of GM-convolutions is local, two properties that facilitate implementation and point out the similarity with Lie groups. Another example of symmetry group corresponds to the isometry group of a Riemannian manifold, whose pushforward preserves the metric tensor. In this case, it is well known that isometries [33] are the only diffeomorphisms which commute with a manifold Laplacian. Thus, any linear operator which commutes with isometries is stabilized by the Laplacian's eigenspaces. However, little is known on the non-linear counterpart of the symmetry-covariant operators. In this work, we characterize non-linear operators which commute with Diff(M). We will see that such operators are intrinsically defined by Diff(M) and could be combined with any linear operators covariant with a symmetry group G.

Non-linear operators It has been shown that Convolutional Neural Networks are dense in the set of non-linear covariant operators [35]. The recipe of the corresponding proof is an extension of the proof of the universal approximation theorem [14]. The Scattering Transform [6, 23] is also an example of a well-understood non-linear operator, which corresponds to a cascade of complex wavelet transforms followed by a point-wise modulus non-linearity. This representation provably linearizes small deformations.

Compact Lie Groups In the context of geometric Machine Learning [5], there are several relevant notions of equivalence. For instance, we can consider a compact Lie Group G acting on M, and an associated representation in F = {f : M → R}: given g ∈ G and f ∈ F, then g.f(x) ≜ f(g−1.x) for x ∈ M. We then consider f ∼ f̃ related by this group action: f̃ = g.f for some g ∈ G. The operators M which are compatible with such a group action are referred to as G-equivariant (or covariant to the action of G) in the ML literature [13, 4]. Such groups are typically of finite and small dimension, e.g. the Euclidean transformations of M = Rd, with d = 2 for computer vision applications, or d = 3 for computational biology/chemistry applications. In this case, it is possible to characterize all linear intrinsic operators M as group convolutions [20], leading to a rich family of non-linear intrinsic operators by composing such group convolutions with element-wise non-linear operators, as implemented in modern Neural Networks; a small numerical illustration of this blueprint is sketched below.
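To make the blueprint concrete, here is a minimal numerical sketch (ours, not from the paper; all names in the snippet are our own): on the discretized circle M = Z/nZ, a circular convolution is a linear operator commuting with cyclic shifts, and composing it with a point-wise non-linearity preserves the equivariance M(g.f) = g.M(f).

```python
import numpy as np

# Illustrative sketch (ours): on M = Z/nZ the translation group acts by
# cyclic shifts.  A circular convolution is a linear intrinsic operator,
# and composing it with a point-wise non-linearity (here ReLU) keeps the
# whole pipeline equivariant to the group action.
rng = np.random.default_rng(0)
n = 64
f = rng.normal(size=n)              # a scalar field on the discretized domain
w = rng.normal(size=n)              # a filter defining the group convolution

def conv(v):                        # circular convolution: commutes with shifts
    return np.fft.irfft(np.fft.rfft(v) * np.fft.rfft(w), n)

def M(v):                           # blueprint: linear intrinsic op + point-wise rho
    return np.maximum(conv(v), 0.0)

g = 7                               # a group element: translation by 7
shift = lambda v: np.roll(v, g)     # the action (g.f)(u) = f(u - g)

assert np.allclose(M(shift(f)), shift(M(f)))    # M(g.f) = g.M(f)
```

The same check fails if `conv` is replaced by an operator that is not translation-covariant, which is precisely the degree of freedom constrained by the choice of the symmetry group G.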
We highlight that stability to symmetries via non-linear operators finds useful application, in particular for flat manifolds [7].

Isometries Riemannian manifolds M come with a default equivalence class, which is given by isometries. TuM denotes the tangent vector space of M at point u ∈ M. If gu : TuM × TuM → R denotes the Riemannian metric tensor at point u ∈ M, a diffeomorphism ψ : M → M is an isometry if gu(v, w) = gψ(u)(dψu(v), dψu(w)) for any u ∈ M and v, w ∈ TuM. In words, isometries are changes of variables that preserve the local distances in the domain. The ensemble of all isometries forms a Lie Group which is locally compact [27]. In this case, one can also build a rich class of intrinsic operators by following the previously explained ‘blueprint’, namely composing linear intrinsic operators with element-wise non-linearities. As a representative example, the Laplace-Beltrami operator of M only depends on intrinsic metric properties [33]: as said above, isometries preserve the invariant subspaces of a Laplacian.

Beyond Isometries While isometries are the ‘natural’ transformations of the geometric domain, they cannot express high-dimensional sources of variability; indeed, if M is a d-dimensional complete connected Riemannian manifold, its isometry group has dimension at most d(d+1)/2 [10]. This raises the question of whether one can characterize intrinsic operators relative to a broader class of transformations. Another class of important symmetries corresponds to gauge transformations, i.e. transformations which preserve changes of parametrization and which are used in [11, 34] through the notion of G-structure. In this work, we consider the class of transformations given by Diff(M), the diffeomorphisms over M. As shown in the Appendix, compactly supported deformations ψ : M → M define bounded linear operators Lψ acting on Lp(M, E) → Lp(M, E), and constitute a far broader class of transformations than isometries. Our proof is mainly based on the use of compactly supported diffeomorphisms. Our objective is to characterize the (non-linear) operators M such that

∀ϕ ∈ Diff(M), LϕM = MLϕ .

In other words, we aim to understand continuous operators M that commute with deformations. We will show that such operators act locally and that they can be described explicitly, by simple formulas. The commutation condition is visualized in the following diagram, where g = Lϕf:

    f  --Lϕ-->  g
    |M    ⟲    |M
    v           v
    Mf --Lϕ--> Mg

2.2 Notations
We will now formally introduce the mathematical objects of interest in this document. Let (M, g) be an orientable, connected, Riemannian manifold of finite dimension d ∈ N∗. Let TM denote the tangent bundle of M, i.e. the union of the tangent spaces at points u ∈ M. T∗M is the cotangent bundle of M. g ∈ Γ(T∗M ⊗ T∗M) is a section of symmetric positive definite bilinear forms on the tangent bundle of M. It is common to denote by ΓB the collection of sections of a bundle B; ∧nT∗M for n ≤ d is the bundle of n-linear alternated forms of M, and Γ(∧nT∗M) is the space of sections of this vector bundle over M. For A ⊆ M, we denote by Ā its closure; 1A is the indicator function of A, i.e. the function taking value 1 if x ∈ A and 0 otherwise. B(u, r) denotes the ball of radius r around u ∈ M. Any two vectors v, v1 ∈ V in a pre-Hilbert space (with a scalar product ⟨·, ·⟩) are orthogonal, denoted v ⊥ v1, when ⟨v, v1⟩ = 0. Fix p ∈ [1,+∞[. Any volume form ω ∈ Γ(∧dT∗M) defines a (positive) measure on the orientable Riemannian manifold M; the total volume of M is ω(M) := ∫_M 1 dω.
Let us define Lpω(M, TM), the space of Lp vector fields, as the subspace of measurable functions f : M → TM such that f(u) ∈ TuM almost everywhere and

∥f∥_p^p ≜ ∫_{u∈M} g_u(f(u), f(u))^{p/2} dω(u) < +∞ .   (1)

We will also consider Lpω(M,R), the space of measurable scalar functions (fields) f : M → R that fulfill

∥f∥_p^p ≜ ∫_{u∈M} |f(u)|^p dω(u) < +∞ .   (2)

We may write ∥·∥ instead of ∥·∥p when there is no ambiguity. For a C∞ diffeomorphism ϕ ∈ Diff(M), we will consider the action of Lϕ : Lpω(M, TM) → Lpω(M, TM), which we define for any f ∈ Lpω(M, TM) as

Lϕf(u) ≜ dϕ(u)^{-1}.f(ϕ(u)) .

Note that this action is contravariant:

Lψ∘ϕf(u) = d(ψ∘ϕ)(u)^{-1}.f(ψ∘ϕ(u)) = LϕLψf(u) .

For a scalar function f ∈ Lpω(M,R), we define the action of ϕ via

Lϕf(u) ≜ f(ϕ(u)) .

This latter operator is also contravariant. Let A be a measurable set of M and f ∈ Lp(M, E); f1A is the product of f with 1A, i.e. f1A is equal to f on A and 0 elsewhere. In what follows we introduce ‘constant’ fields over an open set, denoted c1U with U an open subset of M. For scalar fields, a ‘constant’ scalar field f1U takes the same constant value c ∈ R at every u ∈ U. On the other hand, ‘constant’ vector fields f1U are vector fields over U for which there is a chart from U to an open subset of Rd in which f(u) is equal to a constant vector c ∈ Rd for every u ∈ U; in the vector case we say that the vector field f1U can be straightened. If there is no ambiguity, we will use the same notation Lϕ whether we apply it to Lpω(M,R) or Lpω(M, TM). We might sometimes refer to Lpω(M,R) or Lpω(M, TM) as Lp(M,R) or Lp(M, TM). Throughout the article we restrict ourselves to ϕ such that Lϕ is a bounded operator. Write supp(ϕ) = {u, ϕ(u) ̸= u} for the support of ϕ and say that ϕ has compact support if supp(ϕ) is compact. We denote by Diffc(M) ⊂ Diff(M) the set of compactly supported diffeomorphisms. Recall that since M is second-countable, C∞c(M) is dense in Lpω(M,R) and C∞c(M, TM) is dense in Lpω(M, TM). Finally, denote by Od(R) the group of orthogonal transformations of Rd. Throughout the article, we might not write explicitly that equalities hold almost everywhere, since this is the default in Lp spaces. As mentioned earlier, compactly supported diffeomorphisms lead to continuous operators, which is made rigorous by the following lemma, whose proof is in the appendix.

Lemma 1. If supp(ϕ) is compact, then Lϕ is bounded.

3 Main theorems
In this section we present our main results. We first show that any (non-linear) deformation-equivariant operator acting on scalar fields must be point-wise (Theorem 1), and then establish that any deformation-equivariant operator acting on vector fields corresponds to a multiplication by a scalar (Theorem 2).

3.1 Theorem statements
Now, we are ready to state our two main theorems:

Theorem 1 (Scalar case). Let M be a connected and orientable manifold of dimension d ≥ 1. We consider a Lipschitz continuous operator M : Lpω(M,R) → Lpω(M,R), where 1 ≤ p < ∞. Then,

∀ϕ ∈ Diff(M) : MLϕ = LϕM

is equivalent to the existence of a Lipschitz continuous function ρ : R → R that fulfills M[f](m) = ρ(f(m)) a.e. In that case, we have ρ(0) = 0 if ω(M) = ∞.

Theorem 2 (Vector case). Let M be a connected and orientable manifold of dimension d ≥ 1. We consider a continuous operator M : Lpω(M, TM) → Lpω(M, TM), where 1 ≤ p < ∞. Then,

∀ϕ ∈ Diff(M) : MLϕ = LϕM

is equivalent to the existence of a scalar λ ∈ R such that

∀f ∈ Lpω(M, TM) : M[f](m) = λf(m) a.e.
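As a hedged numerical illustration of Theorem 1 (ours, not from the paper; all names are our own), the sketch below discretizes M = R, applies the scalar action Lϕf = f ∘ ϕ by interpolation, and compares a point-wise non-linearity, which commutes with Lϕ, against a non-local moving average, which does not.

```python
import numpy as np

# Sketch (ours): point-wise non-linearities commute with L_phi f = f∘phi,
# while a non-local operator (a moving average) does not.
u    = np.linspace(-3, 3, 4001)
phi  = lambda t: t + 0.3 * np.sin(t)          # a diffeomorphism of R (phi' > 0)
fval = np.exp(-u**2) * np.sin(5 * u)          # grid samples of a scalar field f

def Lphi(v):                                  # (L_phi f)(u) = f(phi(u)), via interpolation
    return np.interp(phi(u), u, v)

M_point = np.tanh                             # point-wise, Lipschitz, rho(0) = 0
M_avg   = lambda v: np.convolve(v, np.ones(101) / 101, mode="same")

print(np.max(np.abs(M_point(Lphi(fval)) - Lphi(M_point(fval)))))  # tiny (interp error)
print(np.max(np.abs(M_avg(Lphi(fval))   - Lphi(M_avg(fval)))))    # visibly non-zero
```

The first residual vanishes up to discretization error, exactly as the forward direction of Theorem 1 predicts; the second does not, since averaging over a fixed window is not compatible with a non-uniform warp of the domain.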
We highlight that our theorems are quite generic, in the sense that they apply to the manifolds usually used in applications or theory, Rd in particular.

Remark 1. The scalar case allows to recover the standard operators which are exploited in Deep Neural Network architectures. However, Theorem 2 indicates that the group of diffeomorphisms is too rich to obtain non-trivial non-linear operators.

Remark 2. The case p = ∞ leads to different results. For instance, in the scalar case we may consider the operator Mf(x) = sup_y |f(y)|, which fulfills LϕMf = MLϕf but is not point-wise.

Remark 3. The condition “ω(M) = ∞ =⇒ ρ(0) = 0” in Theorem 1 is necessary: in the case M = R, the point-wise operator Mf(x) ≜ e^{if(x)} (for which ρ(0) = 1 ̸= 0) sends the zero function to a constant function of modulus 1, which is not in Lpω(M,R).

Remark 4. The Lipschitz condition in Theorem 1 is crucial; otherwise, Mf(x) = ρ(f(x)) might not map into Lpω(M,R). For instance, take p = 2, M = [1,+∞[ and Mf(x) = √(f(x)): for f(x) = 1/x we have f ∈ Lpω(M,R), while Mf(x) = x^{-1/2} satisfies ∫_1^∞ |Mf(x)|^2 dx = ∫_1^∞ dx/x = ∞, so Mf ̸∈ Lpω(M,R).

Remark 5. If M is not Lipschitz, we can find an example which is not even continuous. The following example holds in both the scalar case and the vector case (on M = R, a vector field identifies with a scalar function); the only thing that changes is the action of Lϕ on f. Take M = R and let, for all f ∈ Lp(M,R):

Mf(x) = 1_{z : lim_{y→z} f(y) = f(z)}(x) f(x) .

It is a measurable function. Let us show that this M is a counterexample in the vector case: for any ϕ ∈ Diff(M) and x ∈ R, one has

MLϕf(x) = 1_{z : lim_{y→z} f(ϕ(y)) = f(ϕ(z))}(x) dϕ(x)^{-1} f(ϕ(x))   (3)
= 1_{z : lim_{y→ϕ(z)} f(y) = f(ϕ(z))}(x) dϕ(x)^{-1} f(ϕ(x))   (4)
= 1_{z : lim_{y→z} f(y) = f(z)}(ϕ(x)) dϕ(x)^{-1} f(ϕ(x))   (5)
= LϕMf(x) .   (6)

However, M is not continuous, as changing a function to 0 on Q does not change its norm but changes the set where the limits exist. More precisely, let c > 0 be a strictly positive scalar; then M[c] = c. Let f = c·1_{x ̸∈ Q}; then M[f] = 0, as {z : ∃ lim_{y→z} f(y)} = ∅. However c = f almost everywhere but M[c] ̸= M[f]; therefore M is not continuous.

3.2 Proof Sketch
We now describe the main ideas for proving Theorems 1 and 2. The appendix contains the complete formal arguments and technical lemmata which we omit here due to lack of space. The two proofs share quite some similarities despite substantially different final results. Three ideas guide our proofs. First, we prove that it is possible to localize M on a certain class of open sets which behave nicely with the manifold structure, the strongly convex sets, which we denote by O1. This is closely related to the notion of pre-sheaf [15]. Secondly, we characterize M on small open sets. In the scalar case, we will study the representation of locally constant functions. In the vector case, we will show that locally, the image M(1Uc) of a vector field c is collinear to c provided that U is small enough. We will also show that those local properties are independent of the position on the manifold M via a connectedness argument. Thirdly and finally, we combine a compactness and a density argument to extend this characterization to M, which is developed in Sec. 3.3. Throughout the presentation, we will use the following definitions and theorems obtained from other works:

Definition 1 (Strong convexity, from [18]). Let O1 be the collection of open sets which are bounded and strongly convex, i.e. such that any points p, q in such a set can be joined by a geodesic contained in the set. Furthermore, let Ȯ1 = {A ∈ O1 : ∃B ∈ O1, Ā ⊂ B and ω(Ā\A) = 0}.
The intuition behind the definition of Ȯ1 is that all of its elements are contained in a ‘security’ open set, which avoids degenerate effects on the manifold. In particular, this allows to control the boundary of a given open set.

Theorem 3 (adapted from [17, 18]). (1) Ȯ1 is a system of neighborhoods. (2) Any element of O1 is diffeomorphic to Rd. (3) Both O1 and Ȯ1 are stable under intersection.

Theorem 4 (Flowbox theorem, as stated in [9]). Let f, g ∈ C∞c(M, TM). For any m ∈ M with f(m) ̸= 0 and g(m) ̸= 0, there exists an open set U ⊂ M and ϕ ∈ Diff(M) such that ϕ(m) = m and Lϕ(1Uf) = 1ϕ(U)g.

We will now present some lemmata that are necessary for the proofs of Theorems 1 and 2. As a first step, we argue that one may assume M(0) = 0, where 0 denotes the constant 0-function. Indeed, we show in the appendix that M(0) is a constant function C, with C = 0 if ω(M) = ∞. Therefore, we may subtract C from ρ and λ, leaving us with having to show the theorems only for M(0) = 0. Next, a key idea of the proof is to exploit the flexibility of the deformation equivariance to localize the input, i.e. to show that the image of compactly supported functions is also compactly supported. To do so, the following lemma provides a way of collapsing an open ball towards a singleton while maintaining good control on the support of the diffeomorphism.

Lemma 2 (Key lemma). Let ϵ > 0. There exists a sequence of diffeomorphisms ϕn : Rd → Rd, compactly supported in B(0, 1 + ϵ), such that:

ϕn(B(0, 1)) = B(0, 1/n) , and sup_{u∈B(0,1)} ∥dϕn(u)∥ ≤ 1/n .

Proof. Set ϕn(u) = fn(∥u∥)u, where

fn(r) = 1/n if |r| ≤ 1 , and fn(r) = 1 if |r| ≥ 1 + ϵ ,

and fn is smoothly interpolated for |r| ∈ [1, 1 + ϵ] in a way that it remains nondecreasing. It is then clear that ϕn fulfills the desired properties. (A short numerical sketch of this construction is given at the end of this subsection.)

We will often use the fact that if the support of ϕ ∈ Diff(M) is such that supp(ϕ) ∩ U = ∅, then for any f ∈ Lpω(M,R) one has 1Uf = Lϕ(1Uf). This implies the following important lemma, for which a rigorous proof can be found in the appendix:

Lemma 3. Let U ∈ Ȯ1 and M as in Theorem 1 or Theorem 2. Then, for any f ∈ E, where E = Lpω(M,R) or E = Lpω(M, TM) respectively, we have:

M[f1U] = 1U M[f] .

Furthermore, if U is any closed set, the same conclusion applies.

Equipped with this result, our proof will characterize the image of functions of the type c1U, where either c ∈ R or c is a vector field which can be straightened (isomorphic to a constant vector), via the following lemmata. In the vector case:

Lemma 4 (Image of localized vector field). For M as in Theorem 2, there exist U ∈ Ȯ1 and λ(U) ∈ R such that for any f ∈ Lpω(M, TM):

M[f1U] = 1U λ(U) f .   (7)

Here is the scalar case:

Lemma 5 (Image of constant functions, scalar case). Let M be as in Theorem 1. For any U ∈ Ȯ1 and c ∈ R:

M(c1U) = h(c, U) 1U .

Furthermore, c ↦ h(c, U) is Lipschitz for any U ∈ Ȯ1.

At this stage, we note that both representations are point-wise, and the next steps of the proofs will be identical for the scalar and vector cases. The extension to Lpω(M,R) or Lpω(M, TM) will be done thanks to:

Lemma 6 (Image of a disjoint union of open sets). Let U1, ..., Un ∈ O1 and M as in Theorem 1 or Theorem 2, such that Ui ∩ Uj = ∅ for all i ̸= j. Then for any f ∈ Lpω(M, TM):

M[ ∑_{i=1}^{n} 1_{Ui} f ] = ∑_{i=1}^{n} M[1_{Ui} f ] .

This lemma states that we can completely characterize M on disjoint unions of simple sets.
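To make Lemma 2 concrete, here is a hedged numerical sketch (ours): it realizes ϕn(u) = fn(∥u∥)u with a smoothstep interpolation on [1, 1+ϵ], a C¹ stand-in for the smooth interpolant of the proof, and checks that B(0,1) is mapped into B(0,1/n). On B(0,1) we have ϕn(u) = u/n exactly, so dϕn = (1/n)Id there and the derivative bound holds.

```python
import numpy as np

# Sketch (ours) of the diffeomorphisms of Lemma 2 on R^2.  The smoothstep
# interpolant is our choice; the proof only needs some smooth nondecreasing one.
def phi_n(u, n, eps=0.5):
    r = np.linalg.norm(u, axis=-1, keepdims=True)
    t = np.clip((r - 1.0) / eps, 0.0, 1.0)   # 0 on B(0,1), 1 outside B(0,1+eps)
    s = t * t * (3.0 - 2.0 * t)              # smoothstep: C^1 and nondecreasing
    fn = 1.0 / n + (1.0 - 1.0 / n) * s       # f_n = 1/n inside, 1 outside
    return fn * u                            # phi_n(u) = f_n(||u||) u

n = 10
pts = np.random.default_rng(0).normal(size=(1000, 2))
ball = pts[np.linalg.norm(pts, axis=1) <= 1.0]    # samples of B(0,1)
image = phi_n(ball, n)
print(np.max(np.linalg.norm(image, axis=1)))      # <= 1/n = 0.1
```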
We will then need an argument similar to the Vitali covering lemma in order to "glue" those open sets together; it shows that simple functions with disjoint supports can approximate any element of Lpω(M,R) or Lpω(M, TM) (we only state the lemma for Lpω(M,R), as our proof on Lpω(M, TM) does not necessarily need this result):

Lemma 7 (Local Vitali). For f ∈ C∞c(M) and m ∈ M, there exists U ∈ Ȯ1 with m ∈ U such that for any ϵ > 0, there exist subsets U1, ..., Un ∈ Ȯ1 with Ui ⊂ U and numbers c1, ..., cn ∈ R such that:

∥ ∑_{i=1}^{n} 1_{Ui} c_i − 1_U f ∥ < ϵ .

Note that this type of covering is not possible on an arbitrary open set without further assumptions on the manifold, such as bounds on its Ricci curvature [22]. Fortunately, we will only need a local version, which is true because charts are locally bi-Lipschitz. Both Lemma 6 and Lemma 7 imply:

Proposition 1. Consider M from either Theorem 1 or 2. Assume that there exists U ∈ Ȯ1 such that M(c1V) = h(c, V)1V for any V ⊂ U with V ∈ Ȯ1, where c is either a vector field in the case E = Lpω(M, TM) or a constant scalar in the case E = Lpω(M,R). If we further assume that c ↦ h(c, U) is L-Lipschitz, then

∀f ∈ E, ∀m ∈ M, M[1_U f](m) = 1_U(m) h(f(m), U) .

Furthermore, it does not depend on U, meaning that for any other such Ũ, we have:

∀f ∈ E, ∀m ∈ U ∩ Ũ, M[1_Ũ f](m) = h(f(m), U) .

We briefly discuss the intuition behind Theorem 2. It is linked to the idea that the operators M at hand have to commute with local rotations, even for locally constant vector fields. We reduce the characterization of deformation-equivariant vector operators to an invariance-to-symmetry argument: maps which are equivariant to rotations are point-wise scalar multiples of their argument. The intuition is contained in the following lemma, which is commonly used in physics:

Lemma 8 (Invariance to rotation). Let f : Rd → Rd be such that for any W ∈ Od(R) and x ∈ Rd, one has f(Wx) = Wf(x). Then there is λ : [0,∞) → R such that f(x) = λ(∥x∥)x.

Proof. We write f(x) = λ(x)x + x⊥, with x⊥ ⊥ x. Suppose x⊥ ̸= 0, and introduce W ∈ Od(R) such that Wx⊥ = −x⊥ and Wx = x. From f(x) = f(Wx) = Wf(x) we deduce that x⊥ = 0. Next, λ(Wx) = λ(x), thus λ(x) = λ(x′) for any ∥x∥ = ∥x′∥.

Distinction between scalar and vector case The scalar case is simpler to handle than the vector case: the proof of Theorem 2 requires several more steps, since one needs to show that the point-wise non-linearity is actually a scalar multiplication. We also highlight that the non-linearity is fully determined by its image on locally constant functions. Finally, we conclude the proof of the theorems by appealing to a standard density argument on smooth, compactly supported functions, combining all the lemmata we have just presented, in Sec. 3.3.

3.3 Proof conclusions (common to the scalar and vector case)
In this section, we prove that the local properties of M can be extended globally on M. The main idea is to exploit the well-known Poincaré (inclusion-exclusion) formula, which states that:

1_{∪i Ui} = ∑_{k=1}^{n} (−1)^{k+1} ∑_{i1<...<ik} 1_{Ui1 ∩ Ui2 ∩ ... ∩ Uik} ,

and to localize the action of M on each Ui1 ∩ Ui2 ∩ ... ∩ Uik ∈ Ȯ1 thanks to Lemma 3.
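Before running the argument, here is a quick numerical confirmation (ours) of the inclusion-exclusion identity above, with random subsets of a finite ground set standing in for the Ui:

```python
import itertools
import numpy as np

# Sketch (ours): verify 1_{∪U_i} = sum_{k>=1} (-1)^{k+1} sum_{i1<...<ik} 1_{∩ U_ij}
# on random boolean masks standing in for the sets U_i.
rng = np.random.default_rng(0)
n_sets, n_pts = 4, 200
U = rng.random((n_sets, n_pts)) < 0.5            # U[i] is the mask of U_i

lhs = np.any(U, axis=0).astype(float)            # 1_{U_1 ∪ ... ∪ U_n}
rhs = np.zeros(n_pts)
for k in range(1, n_sets + 1):
    for idx in itertools.combinations(range(n_sets), k):
        rhs += (-1) ** (k + 1) * np.all(U[list(idx)], axis=0)
assert np.allclose(lhs, rhs)
```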
Proof of Theorem 1 and Theorem 2. Let f be a smooth and compactly supported function. Further, consider ∪_{i≤n} Ui a finite covering of its support with Ui ∈ Ȯ1. Using the inclusion-exclusion formula together with Lemma 3, we obtain

1_{∪i Ui} M[f] = ∑_{k=1}^{n} (−1)^{k+1} ∑_{i1<...<ik} 1_{Ui1 ∩ ... ∩ Uik} M[f] = ∑_{k=1}^{n} (−1)^{k+1} ∑_{i1<...<ik} M[f 1_{Ui1 ∩ ... ∩ Uik}] ,

where we used that Ui1 ∩ Ui2 ∩ ... ∩ Uik ∈ Ȯ1. Now, the support of f is closed and included in ∪i Ui. Thus, using Lemma 3 again:

M[f] = ∑_{k=1}^{n} (−1)^{k+1} ∑_{i1<...<ik} M[f 1_{Ui1 ∩ Ui2 ∩ ... ∩ Uik}] .

Note that if ρ is a point-wise operator with ρ(0) = 0, then ρ(1U f) = 1U ρ(f), and

∑_{k=1}^{n} (−1)^{k+1} ∑_{i1<...<ik} ρ(f 1_{Ui1 ∩ ... ∩ Uik}) = ∑_{k=1}^{n} (−1)^{k+1} ∑_{i1<...<ik} 1_{Ui1 ∩ ... ∩ Uik} ρ(f)   (8)
= 1_{∪i Ui} ρ(f) = ρ(f) .   (9)

Thus Mf = ρ(f), where ρ is obtained from Lemma 4 or Lemma 5 combined with Proposition 1. We conclude by density in Lpω(M,R) or Lpω(M, TM) respectively. This ends the proof.

4 Remarks and conclusion
In this work, we have fully characterized the non-linear operators which commute with the action of smooth deformations. In some sense, it settles the intuitive fact that commutation with the whole diffeomorphism group is too strong a property, leading to a small, nearly trivial family of non-linear intrinsic operators. While on their own they have limited interest for geometric deep representation learning, they can ‘upgrade’ any family of linear operators associated with any group G ⊂ Diff(M) into a powerful non-linear class, the so-called GDL Blueprint of [4]. Also, this result is a first step towards characterizing the non-linear operators which commute with gauge transformations, and could give useful insights for specifying novel gauge-invariant architectures. We now state a couple of unsolved questions and directions for future work.

On the commutativity assumption: Several examples and approximation results [21, 35] exist for operators that commute with Lie groups and discrete groups [19]. In this case, it is possible to define a measure on the group that is invariant under the group action (called the Haar measure), which makes it possible to define convolutions. Roughly, non-linear operators covariant with some actions of those groups can be approximated by Group Convolutional Neural Networks. It is important to note that the inputs of the operators described in these articles are functions that take real values; the much more general class of inputs that take values in vector bundles is, to our knowledge, not covered in the literature. To our knowledge, we are the first work to study the design of equivariant Neural Networks that process vector fields defined over a manifold. In this setting, even for M = Rd, it is unclear which type of non-linear operators commute with smaller groups of symmetry such as the Euclidean group. In fact, a generic question holds for manifolds: for a given symmetry group G, what is the elementary non-linear building block of a Neural Network? This could, for instance, be useful to design Neural Networks which are gauge invariant. It is an open question for future work which would be relevant to many applications in physics [16]. Furthermore, the fact that the characterization of diffeomorphism-invariant operators we exhibited in this paper is very restrictive opens the way for the study of other, ‘smaller’ non-locally compact groups; we believe that any results in that direction are completely novel.

Example of vector operators for L∞: It is slightly unclear how the vector case p = ∞ can be handled in our framework, yet [1] seems to have interesting insights toward this direction.

Linearization of Diff(M): In this work, we considered an exact commutation between operators and symmetries; however, it is unclear which operators approximately commute with a given symmetry group. Such operators would be better suited to linearize a high-dimensional symmetry group like Diff(M).
An important instance of non-linear operators that are non-local and that ‘nearly’ commute with diffeomorphisms is the Wavelet Scattering representation [23, 7, 28].

Acknowledgments and Disclosure of Funding
EO was supported by the Project ANR-21-CE23-0030 ADONIS and EMERG-ADONIS from Alliance SU. GSP was also supported by France Relance and Median Technologies; he would like to warmly thank the NeurIPS Foundation for their financial support (NeurIPS 2022 Scholar Award).
1. What is the focus and contribution of the paper regarding operators on a manifold? 2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis? 3. Do you have any concerns or questions regarding the paper, especially when reducing symmetries? 4. Are there any limitations regarding the study of operators on a manifold?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper theoretically studies the operators on a manifold M that commute with the group of diffeomorphisms Diff(M). Specifically, the author proves that in the case of scalar fields, only point-wise non-linearities are the corresponding operators; and in the case of vector fields, those operators correspond to scalar multiplication. These results indicate that there is no universal class of non-linear operators (e.g., neural networks) that commutes with any group of symmetries of M.
Strengths And Weaknesses
Strengths: This paper, especially the theory and its corresponding proofs, is well-written and easy to follow. The proved theory appears to be novel and solid, especially the result in the context of neural networks. I think this is a very nice contribution to geometric deep learning theory.
Weaknesses and comments: Overall, I don't have any criticism of the theory; my only question is: since one of the results is that there is no universal class of non-linear operators that commute with 'any' group of symmetries, I wonder what we can say about the operators if we reduce the symmetries to some specific symmetry group. In this case, would it be possible to find some particular non-linear operators that commute with such symmetries? I think it would be great if there were a discussion on this.
Minor comments: The operator M (not the manifold M) is mentioned in line 16 but it is defined later in line 64.
Questions
All the suggestions and questions are presented in the strengths and weaknesses section.
Limitations
Yes
NIPS
Title On Non-Linear operators for Geometric Deep Learning Abstract This work studies operators mapping vector and scalar fields defined over a manifold M, and which commute with its group of diffeomorphisms Diff(M). We prove that in the case of scalar fields Lω(M,R), those operators correspond to point-wise non-linearities, recovering and extending known results on R. In the context of Neural Networks defined over M, it indicates that point-wise nonlinear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields Lω(M, TM), we show that those operators are solely the scalar multiplication. It indicates that Diff(M) is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of M. N/A This work studies operators mapping vector and scalar fields defined over a manifold M, and which commute with its group of diffeomorphisms Diff(M). We prove that in the case of scalar fields Lpω(M,R), those operators correspond to point-wise non-linearities, recovering and extending known results on Rd. In the context of Neural Networks defined over M, it indicates that point-wise nonlinear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields Lpω(M, TM), we show that those operators are solely the scalar multiplication. It indicates that Diff(M) is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of M. 1 Introduction Given a physical domain M and measurements f : M → Y observed over it, one is often interested in processing intrinsic information from f , i.e. consistent with the symmetries of the domain. Let M denote an operator, it can be seen as a non-linear operator acting on measurements. In words, if two measurements f , f̃ = g.f are related by a symmetry g of the domain, like a rigid motion on an observed molecular compound, we would like our processed data M(f) and M(f̃) to be related by the same symmetry — thus that M(g.f) = g.M(f) or equivalently that M commutes with the symmetry transformation of the domain. The study of operators that satisfy such symmetry constraints has played a long and central role in the history of physics and mathematics, motivated by the inherent symmetries of physical laws. More recently, such importance has also extended to the design of machine learning systems, where symmetries improve the sample complexity [25, 3]. For instance, Convolutional Neural Networks build translation symmetry, whereas Graph Neural Networks build permutation symmetry, amongst other examples coined under the ‘Geometric Deep Learning’ umbrella [5, 4]. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Lie groups of transformations are of particular interest, because there exists a precise and systematic framework to build such intrinsic operators. Indeed, for a locally compact group G, it is possible to define a Haar measure which is invariant to the action of G [2]; then a simple filtering along the orbit of G allows to define a class of linear operators that commute with the group action. 
Examples of locally compact groups are given by specific Lie groups acting on Rd, such as the translations or the rotations Od(R). Often these Lie groups G only act on a manifold M, and one tries to average along the orbit induced by G. Note that it is possible, beyond invariance, to linearize more complex groups of variability like diffeomorphisms Diff(M) [7]. While the description of such linear intrinsic structures is of central mathematical importance and forms the basis of Representation theory [30], in itself is not sufficient to bear fruit in the context of Representation learning using Neural Networks [12]. Indeed, linear operators do not have the capacity to extract rich information needed to solve challenging high-dimensional learning problems. It is therefore necessary to extend the systematic construction and classification of intrinsic operators to the non-linear case. With that purpose in mind, our work aims at studying the class of (non-linear) operators M which commute with the action of the group Diff(M), the diffeomorphisms over M. This approach will lead to a natural class of non-linear intrinsic operators. Indeed, any group G of symmetries is, by definition, a subgroup of Diff(M), and thus commutes with such M [24]. Consequently, obtaining a non-linear invariant to a symmetry groupG could be done by using a cascade of interlacing non-linear operators which commute with Diff(M) and linear operators which commute with G. A notable example of linear operators that are covariant to the Lie group of translations is a given by the convolutions along the orbit of the group. These can be constructed thanks to the canonical Haar measure [32]. However, such an approach fails for infinite dimensional groups, like our object of interest: contrary to Lie groups, Diff(M) is not locally compact and it is thus not possible to define a Haar measure on this group. Our first contribution is to demonstrate that the non-linear operators which act on vector fields (elements of Lpω(M, TM)) and which commute with the group of diffeomorphisms, are actually just scalar multiplications. This implies that Diff(M) is too rich to obtain non-trivial operators. Our second contribution is to demonstrate that non-linear operators acting on signals in Lpω(M,R) are pointwise non-linearities. This fills a gap in the results of [7], and a fortiori justifies the use of point-wise non-linearities in geometric Deep Learning [4]. Let us remark that the study of equivariant operators that take as input vector fields is motivated by the use of Neural Networks in physics, in particular for dynamical systems such as fluid dynamics [8]. For example, one subject of interest in hydrodynamics is how a vector field of velocities evolves; the time evolution of such field is described by a partial differential equation (PDE), the Navier-Stokes equations, in which Neural Networks found recent applications and it is more generally the case of other PDE [31]. Our paper is structured as follows: Sec. 2 introduces the necessary formalism, that we use through this paper: in particular, we formally define the action of diffeomorphism. Then, we state and discuss our theorems in Sec. 3.1 and sketch their proofs in Sec. 3.2. Rigorous proofs of each statement can be found in the Appendix. 2 Problem Setup 2.1 Related work and motivation In this section, we discuss the notion of intrinsic operators, invariant and covariant non-linear operators and linear representation over standard symetry groups. 
Then, we formally state our objective. Intrinsic Operators As discussed above, in this work we are interested in intrinsic operators M : Lp(M, E) → Lp(M, E), where M is a Riemannian manifold, and E = R or E = TM, capturing respectively the setting of scalar signals and vector fields over M. Lp(M,R) is the space of scalar function f : M → R which p-th power is integrable, similarly Lp(M, TM) is the space of sections of the tangent bundle of M (denoted TM), f : M → TM, which norm ∥f∥ : M → R is in Lp(M,R). Here the notion of ‘intrinsic’ means that M is consistent with an equivalence class induced by a symmetry group G in Lp(M, E): if f, f̃ ∈ Lp(M, E) are related by a transformation g ∈ G (in which case we write f = g.f̃ ), then M(f) = g.M(f̃). Naturally, a stronger equivalence class imposes a stronger requirement towards M , and consequently restrains the complexity of M . We now describe the plausible techniques used to design such operators M . GM-Convolutions The notion ofGM -convolutions [34] is an example of linear covariant operators which commute with the reparametrization of a manifold. In practice, this implies that the weights of a GM -convolution are shared and the action of GM -convolutions is local – two properties that facilitate implementation and point out the similarity with Lie groups. Another example of symmetry group corresponds to the isometry group of a Riemaniann manifold, whose pushforward preserves the tensor metric. In this case, it is well known that isometries [33] are the only diffeomomorphism which commute with a manifold Laplacian. Thus, any linear operators which commute with isometries is stabilized by Laplacian’s eigenspaces. However, little is known on the non-linear counterpart of the symmetry-covariant operators. In this work, we characterize non-linear operators which commute with Diff(M). We will see that such operators are intrinsically defined by Diff(M) and could be combined with any linear operators covariant with a symmetry group G. Non-linear operators It has been shown that Convolutional Neural Networks are dense in the set of non-linear covariant operators [35]. The recipe of the corresponding proof is an extension of the proof of the universal approximation theorem [14]. The Scattering Transform [6, 23] is also an example of a well-understood non-linear operator which corresponds to a cascade of complex wavelet transforms followed by a point-wise modulus non-linearity. This representation provably linearizes small deformations. Compact Lie Groups In the context of geometric Machine Learning [5], there are several relevant notions of equivalence. For instance, we can consider a compact Lie Group G acting on M, and an associated representation in F = {f : M → R}: Given g ∈ G and f ∈ F , then g.f(x) ≜ f(g−1.x) for x ∈ M. We then consider f ∼ f̃ , related by this group action: f̃ = g.f for some g ∈ G. The operators M which are compatible with such group action are referred as being G-equivariant (or covariant to the action of G) in the ML literature [13, 4]. Such groups are typically of finite and small dimension, e.g. the Euclidean transformations of M = Rd, with d = 2 for computer vision applications, or d = 3 for computational biology/chemistry applications. In this case, it is possible to characterize all linear intrinsic operators M as group convolutions [20], leading to a rich family of non-linear intrinsic operators by composing such group convolutions with element-wise non-linear operators, as implemented in modern Neural Networks. 
We highlight that stability to symetries via non-linear operators finds useful application, in particular for flat manifolds [7]. Isometries Riemanian manifolds M come with a default equivalence class, which is given by isometries. TuM denotes the tangent vector space of M at point u ∈ M. Ifmu : TuM×TuM → R denotes the Riemannian metric tensor at point u ∈ M, a diffeomorphism ψ : M → M is an isometry if gu(v, w) = gψ(u)(dψu(v), dψu(w)) for any u ∈ M and v, w ∈ TuM. In words, isometries are changes of variables that preserve the local distances in the domain. The ensemble of all isometries forms a Lie Group which is locally compact [27]. In this case, one can also build a rich class of intrinsic operators by following the previously explained ‘blueprint’, namely composing linear intrinsic operators with element-wise non-linearities. As a representative example, the LaplaceBeltrami operator of M only depends on intrinsic metric properties [33]: as said above, isometries preserve the invariant subspaces of a Laplacian. Beyond Isometries While isometries are the ‘natural’ transformations of the geometric domain, they cannot express high-dimensional sources of variability; indeed, if M is a d-dimensional complete connected Riemannian manifold, its isometry group has dimension at most d(d+ 1)/2 [10]. This raises the question whether one can characterize intrinsic operators relative to a broader class of transformations. Another class of important symmetries corresponds to the ones which are gauge invariant, i.e. which leads to transformations which preserve the change of parametrization and which are used in [11, 34] through the notion of G-structure. In this work, we consider the class of transformations given by Diff(M), the diffeomorphisms over M. As shown in the Appendix, compactly supported deformations ψ : M → M define bounded linear operators Lψ acting on Lp(M, E) → Lp(M, E), and constitute a far broader class of transformations than isometries. Our proof is mainly based on the use of compactly supported diffeomorphisms. Our objective is to characterize the (non-linear) operators M such that ∀ϕ ∈ Diff(M), LϕM =MLϕ . In other words, we aim to understand continuous operators M that commute with deformations. We will show that such operators are act locally and that they can be descriped explicitly, with simple formula. The commutation condition is visualized in the following diagram: f Lϕ // M ⟲ g M Mf Lϕ // Mg 2.2 Notations We will now formally introduce the mathematical objects of interest in this document. Let (M, g) be an orientable, connected, Riemannian manifold, of finite dimension d ∈ N∗. Let TM denote the tangent bundle of M, i.e. the union of tangent spaces at points u ∈ M. T ∗M is the cotangent bundle of M. g ∈ Γ(T ∗M ⊗ T ∗M) is a section of symmetric definite positive bilinear forms on the tangent bundle of M . It is common to denote ΓB the collection of sections of a bundle B; ∧n T ∗M for n ≤ d is the bundle of n-linear alternated forms of M, and Γ( ∧n T ∗M) is the space of section of this vector bundle over M. For A ⊆ M, we denote A its closure; 1A is the indicator function of A, i.e. which takes value 1 if x ∈ A and 0 otherwise. B(u, r) denotes the ball of radius r around u ∈ M. Any two vectors v, v1 ∈ V in a pre-Hilbert space (with a scalar product ⟨, ⟩) are orthogonal, denoted v ⊥ v1, when ⟨v, v1⟩ = 0. Fix p ∈ [1,+∞[. Any volume form ω ∈ Γ( ∧d T ∗M) defines a (positive) measure on the orientable Riemannian manifold M; the total volume of M is ω(M) := ∫ M 1dω. 
Let us define L p ω(M, TM), the space of Lp vector fields, defined as the subspace of measurable functions f : M → TM such that f(u) ∈ TuM almost everywhere and ∥f∥pp ≜ ∫ u∈M gu(f(u), f(u)) p 2 dω(x) < +∞ . (1) We will also consider Lpω(M,R) the space of measurable scalar functions (fields) f : M → R that fulfill ∥f∥pp ≜ ∫ u∈M |f(u)|p dω(u) < +∞ . (2) We may write ∥ · ∥ instead of ∥ · ∥p when there is no ambiguity. For a C∞ diffeomorphism ϕ ∈ Diff(M), we will consider the action of Lϕ : Lpω(M, TM) → Lpω(M, TM) which we define for for any f ∈ Lpω(M,R) as Lϕf(u) ≜ dϕ(u) −1.f(ϕ(u)) . Note that this action is contravariant: Lψ◦ϕf(u) = d(ψ ◦ ϕ)−1.f(ψ ◦ ϕ(u)) = LϕLψf(u) For scalar function f ∈ Lpω(M,R), we define the action of ϕ via Lϕf(u) ≜ f(ϕ(u)) . Let A be a measurable set of M and f ∈ Lp(M, E), f1A is the product of f with 1A, i.e. f1A is equal to f on A and 0 elsewhere. In what follows we introduce ’constant’ fields over an open set, they are denoted c1U with U an open subset of M. For scalar fields, a ’constant’ scalar field f(u) is equal to the same constant c ∈ R for any u ∈ U . On the other hand, ’constant’ vector fields f1U are vector fields over U for which there is a chart from U to an open subset of Rd, in which for any u ∈ U f(u) is equal to a constant vector c ∈ Rd; in the vector case we say that the vector field f1U can be straightened. This latter operator is also contravariant. If there is no ambiguity, we will use the same notation Lϕ, whether we apply it to Lpω(M,R) or Lpω(M, TM). We might sometimes refer to Lpω(M,R) or Lpω(M, TM) as Lp(M,R) or Lp(M, TM). Throughout the article we restrict ourselves to ϕ such that Lϕ is a bounded operator. Write supp(ϕ) = {u, ϕ(u) ̸= u} for the support of ϕ and say that ϕ has a compact support if supp(ϕ) is compact. We denote by Diffc(M) ⊂ Diff(M) the set of compactly supported diffeomorphisms. Recall that since M is second-countable, C∞c (M) is dense in Lpω(M,R) and C∞c (M, TM) is dense in Lpω(M, TM). Finally, denote by Od(R) the set of unitary operators on Rd. Throughout the article, we might not write explicitly that equalities hold almost everywhere, since this is the default in Lp spaces. As mentioned earlier, compactly supported diffeomorphisms lead to continuous operators, which is made rigorous by the following lemma whose proof is in the appendix. Lemma 1. If supp(ϕ) is compact, then Lϕ is bounded. 3 Main theorems In this section we present our main results. We first show that any (non-linear) deformationequivariant operator acting on scalar fields must be point-wise (Theorem 1), and then establish that any deformation-equivariant operator acting on vector fields corresponds to a multiplication by a scalar (Theorem 2). 3.1 Theorem statements Now, we are ready to state our two main theorems: Theorem 1 (Scalar case). Let M be a connected and orientable manifold of dimension d ≥ 1. We consider a Lipschitz continuous operator M : Lpω(M,R) → Lpω(M,R), where 1 ≤ p <∞. Then, ∀ϕ ∈ Diff(M) : MLϕ = LϕM is equivalent to the existence of a Lipschitz continuous function ρ : R → R that fulfills M [f ](m) = ρ(f(m)) a.e. In that case, we have ρ(0) = 0 if ω(M) = ∞. Theorem 2 (Vector case). Let M be a connected and orientable manifold of dimension d ≥ 1. We consider a continuous operator M : Lpω(M, TM) → Lpω(M, TM), where 1 ≤ p <∞. Then, ∀ϕ ∈ Diff(M) : MLϕ = LϕM is equivalent to the existence of a scalar λ ∈ R such that ∀f ∈ Lpω(M, TM) : M [f ](m) = λf(m) a.e. 
We highlight that our theorems are quite generic in the sense that they apply to the manifolds usually used in applications or theory, Rd in particular. Remark 1. The scalar case allows to recover standard operators which are exploited for Deep Neural Networks architectures. However, Theorem 2 indicates that the group of diffeomorphism is too rich to obtain non-trivial non-linear operators. Remark 2. The case p = ∞ leads to different results. For instance, in the scalar case we may consider the operator Mf(x) = supy |f(y)| which fulfills LϕMf =MLϕf but is not pointwise. Remark 3. The condition “ω(M) = ∞ =⇒ ρ(0) = 0” in Theorem 1 is necessary, since in the case M = R, the operator Mf(x) ≜ eif(x) is not in Lpω(M,R). Remark 4. The Lipschitz condition in Theorem 1 is crucial, otherwise, Mf(x) = ρ(f(x)) might not be an operator of Lpω(M,R). For instance, if p = 2, M = [0, 1] and Mf(x) = √ f(x), we see that in this case, let f(x) = x, then f ∈ Lpω(M,R) and Mf ̸∈ Lpω(M,R) Remark 5. IfM is not Lipschitz, we can find an example which is not even continuous. The following example holds in both cases, the scalar case and the vector case. In both cases f ∈ Lp(M,R), the only thing that changes is the action of Lϕ on f . M = R, let for all f ∈ Lp(M,R): Mf(x) = 1{z,limy→z f(y)=f(z)}(x)f(x). It is a measurable function. Let us show that this M is a counterexample to the vector case: for any ϕ ∈ Diff(M) and x ∈ R, one has MLϕf(x) = 1{z,limy→z f(ϕ(y))=f(ϕ(z))}(x) dϕ(x) −1f(ϕ(x)) (3) = 1{z,limy→ϕ(z) f(y)=f(ϕ(z))}(x) dϕ(x) −1f(ϕ(x)) (4) = 1{z,limy→z f(y)=f(z)}(ϕ(x)) dϕ(x) −1f(ϕ(x)) (5) = LϕMf(x) . (6) However, M is not continuous as changing any function to 0 on Q does not change its norm but changes the set where the limits exists. More precisely let c > 0 be a strictly positive scalar,M [c] = c; let f = c1[x /∈ Q], M [f ] = 0 as {z,∃ limy→z f(ϕ(y))} = ∅. However c = f almost everywhere but M [c] ̸=M [f ] therefore M is not continuous. 3.2 Proof Sketch We now describe the main ideas for proving the Theorems 1 and 2. The appendix contains complete formal arguments and technical lemmata which we omit here due to lack of space. The two proofs share quite some similarities despite substantially different final results. Three ideas guide our proofs: First, we prove that it is possible to localize M on a certain class of open sets which behaves nicely with the manifold structure, the strongly convex sets which we denote as O1. This is closely related to the notion of pre-sheaf [15]. Secondly, we characterize M on small open-sets. In the scalar case, we will study the representation of locally constant functions. In the vector case, we will show that locally, the image M(1Uc) of a vector field c is co-linear to c provided that U is small enough. We will also show that those local properties are independent of the position on the manifold M via a connectedness argument. Thirdly and finally, we combine a compacity and a density argument to extend this characterization to M, which is developed in Sec. 3.3. Throughout the presentation, we will use the following definitions and theorems obtained from other works: Definition 1 (Strong convexity, from [18]). Let O1 be the collection of open sets which are bounded and strongly convex, i.e. such that any points p, q in such a set can be joined by a geodesic contained in the set. Furthermore let Ȯ1 = {A ∈ O1 : ∃B ∈ O1, Ā ⊂ B and ω(Ā\A) = 0}. 
The intuition behind the definition of Ȯ1 is that all of its elements are contained in a ‘security’ open set,which avoids degenerated effects on the manifold. In particular, this allows to control the boundary of a given open set. Theorem 3 (theorem adapted from [17, 18]). (1) Ȯ1 is a system of neighborhoods. (2) Any element of O1 is diffeomorph to Rd. (3) Both O1 and Ȯ1 are stable by intersection. Theorem 4 (Flowbox theorem, as stated in [9]). Let f, g ∈ C∞c (M, TM). For any m ∈ M with f(m) ̸= 0 and g(m) ̸= 0, there exists an open set U ⊂ M and ϕ ∈ Diff(M) such that ϕ(m) = m and Lϕ(1Uf) = 1ϕ(U)g. We will now present some lemmata that are necessary for the proofs of theorems 1 and 2. As a first step, we argue that one may assume M(0) = 0 where 0 denotes the constant 0-function. This is because in the appendix we show that M(0) is a constant function C, with C = 0 if ω(M) = ∞. Therefore, we may substract C from ρ and λ, leaving us with having to show the theorems only for M(0) = 0. Next, a key idea of the proof is to exploit the flexibility of the deformation equivariance to localise the input, i.e. to show that the image of compactly supported functions is also compactly supported. To do so, the following lemma provides a way of collapsing an open ball into a singleton while maintaining a good control on the support of the diffeomorphism. Lemma 2 (Key lemma). Let ϵ > 0. There exists a sequence of diffeomorphisms ϕn : Rd → Rd, compactly supported in B(0, 1 + ϵ) such that: ϕn(B(0, 1)) = B(0, 1 n ) , and sup u∈B(0,1) ∥dϕn(u)∥ ≤ 1 n . Proof. Set ϕn(u) = fn(∥u∥)u, where fn(r) = { 1 n , if |r| ≤ 1 1 , if |r| ≥ 1 + ϵ , and fn is smoothly interpolated for |r| ∈ [1, 1 + ϵ] in a way that it remains nondecreasing. It is then clear that ϕn fulfills the desired properties. We will often use that if the support of ϕ ∈ Diff(M) is such that supp(ϕ) ∩ U = ∅, then for any f ∈ Lpω(M,R) one has 1Uf = Lϕ(1Uf). This implies the following important lemma, for which a rigorous proof can be found in the appendix: Lemma 3. Let U ∈ Ȯ1 and M as in Theorem 1 or Theorem 2. Then, for any f ∈ E, where E = Lpω(M,R) or E = Lpω(M, TM) respectively, we have: M [f1U ] = 1UM [f ] . Furthermore, if U is any closed set, the same conclusion applies. Equipped with this result, our proof will characterize the image of functions of the type c1U where either c ∈ R, or c is a vector field which can be straightened (isomorphic to a constant vector), via the following Lemma. In the Vector case: Lemma 4 (Image of localized vector field). For M as in Theorem 2 there is U ∈ Ȯ1, and λ(U) such that for any f ∈ Lpω(M,TM): M [f1U ] = 1Uλ(U)f . (7) Here is the scalar case: Lemma 5 (Image of constant functions, scalar case). Let M as in Theorem 1. For any U ∈ Ȯ1 and c ∈ R, then: M(c1U ) = h(c, U)1U . Furthermore, c→ h(c, U) is Lipschitz for any U ∈ Ȯ1. At this stage, we note that both representations are point-wise, and the next steps of the proofs will be identical both for the scalar and vector cases. The extension to Lpω(M,R) or Lpω(M, TM) will be done thanks to: Lemma 6 (Image of a disjoint union of opensets). Let U1, ..., Un ∈ O1 and M as in Theorem 2 or Theorem 1, s.t. ∀i ̸= j, Ui ∩ Uj = ∅. Then for any f ∈ Lpω(M, TM): M [ n∑ i=1 1Uif ] = n∑ i=1 M [1Uif ] . This lemma states that we can completely characterize M on disjoint union of simple sets. 
We will then need an argument similar to Vitali covering Lemma in order to "glue" those open sets together, which shows that simple functions with disjoint support can approximate any elements of Lpω(M,R) or Lpω(M, TM) (we only state the lemma for Lpω(M,R) as our proof on Lpω(M, TM) does not necessarily need this result): Lemma 7 (Local Vitali). For f ∈ C∞c (M) and m ∈ M, there exists U ∈ Ȯ1 with m ∈ U , such that for any ϵ > 0, there exist subsets U1, ..., Un ∈ Ȯ1 with Ui ⊂ U and numbers c1, ..., cn ∈ R such that: ∥ ∑ n 1Uncn − 1Uf∥ < ϵ . Note that this type of covering is not possible on any open set without further assumptions on the manifold, such as bounds on its Ricci curvature [22]. Fortunately, we will only need a local version which is true because charts are locally bi-Lipschitz. Both Lemma 6 and Lemma 7 imply that: Proposition 1. Consider M from either Theorem 1 or 2. Assume that there exists U ∈ Ȯ1 such that M(c1V ) = h(c, V )1V for any V ⊂ U , with V ∈ Ȯ1, where c is either a vector field in the case E = Lpω(M, TM) or a constant scalar in the case E = Lpω(M,R). If we further assume that c→ h(c, U) is L-Lipschitz, then ∀f ∈ E,∀m ∈ M,M [1Uf ](m) = 1Uh(f(m), U) . Furthermore, it does not depend on U , meaning that for any other such Ũ , we have: ∀f ∈ E,∀m ∈ U ∩ Ũ ,M [1Ũf ](m) = 1Uh(f(m), U) . We briefly discuss the intuition behind Theorem 2. It is linked to the idea that the operators M at hand have to commute with local rotations, and this even for locally constant vector fields. We reduce the characterisation of deformation-equivariant vector operators using an invariance to symmetry argument: functions which are invariant to rotations are multiples of a scalar. The intuition is contained in the following lemma, which is commonly used in physics: Lemma 8 (Invariance to rotation). Let f : Rd → Rd such that for any W ∈ Od(R) and x ∈ Rd, one has f(Wx) =Wf(x). Then, there is λ : Rd → R, f(x) = λ(∥x∥)x. Proof. We write f(x) = λ(x)x+x⊥, with x⊥(m) ̸= 0 and x⊥ ⊥ x. Then, we introduceW ∈ Od(R) such that Wx⊥(m) = −x⊥(m) and Wx(m) = x(m). From f(x) = f(Wx) =Wf(x) we deduce that x⊥ = 0. Next, λ(Wx) = λ(x) thus λ(x) = λ(x′) for any ∥x∥ = ∥x′∥. Distinction between scalar and vector case The scalar case is simpler to handle than the vector case: there are several more steps for the proof of Theorem 2, one needs to show that the point-wise non-linearity is actually a scalar multiplication. We also highlight that the non-linearity is fully defined by its image on locally constant functions. Finally, we conclude the proof of the theorem by appealing to a common density argument of the functions smooth with compact support, combing all the lemmata we have just presented in Sec. 3.3. 3.3 Proofs conclusions (common to the scalar and vector case) In this section, we prove that the local properties of M can be extended globally on M. The main idea is to exploit the well-known Poincaré’s formula, which states that: 1∪iUi = n∑ k=1 (−1)k ∑ i1<...<ik 1Ui1∩Ui2∩...∩Uik , and to localize the action of M on each Ui1 ∩ Ui2 ∩ ... ∩ Uik ∈ Ȯ1 thanks to Lemma 3. Proof of Theorem 1 and Theorem 2. Let f be a smooth and compactly supported function. Further consider ∪i≤nUi a finite covering of its support with Ui ∈ Ȯ1. Using an inclusion-exclusion formula together with Lemma 3, we obtain 1∪iUiM [f ] = n∑ k=1 (−1)k ∑ i1<...<ik 1Ui1∩Ui2∩...∩UikM [f ] = n∑ k=1 (−1)k ∑ i1<...<ik M [f1Ui1∩Ui2∩...∩Uik ] , where we used that Ui1 ∩Ui2 ∩ ...∩Uik ∈ Ȯ1. Now, the support of f is closed and included in ∪iUi. 
Thus using Lemma 3: M [f ] = n∑ k=1 (−1)k ∑ i1<...<ik M [f1Ui1∩Ui2∩...∩Uik ], Note that if ρ is a pointwise operator with ρ(0) = 0, then ρ(1Uf) = 1Uρ(f) and n∑ k=1 (−1)k ∑ i1<...<ik ρ(f1Ui1∩Ui2∩...∩Uik ) = n∑ k=1 (−1)k ∑ i1<...<ik 1Ui1∩Ui2∩...∩Uik ρ(f) (8) = 1∪iUiρ(f) = ρ(f) . (9) Thus, Mf = ρ(f) where ρ is obtained from Lemma 4 or 5 combined with Prop 1. We conclude by density in Lpω(M,R) or Lpω(M, TM) respectively. This ends the proof. 4 Remarks and conclusion In this work, we have fully characterized non-linear operators which commute under the action of smooth deformations. In some sense, it settles the intuitive fact that commutation with the whole diffeomorphism group is too strong a property, leading to a small, nearly trivial family of non-linear intrinsic operators. While on their own they have limited interest for geometric deep representation learning, they can ‘upgrade’ any family of linear operators associated with any group G ⊂ Diff(M) into a powerful non-linear class — the so-called GDL Blueprint in [4]. Also, this result is a first step towards characterizing the non-linear operators which commute with Gauge transformations and could give useful insights for specifying novel Gauge invariant architectures. We now state a couple of unsolved questions and future work. On the commutativity assumption: Several examples and approximation results [21][35] exist for operators that commute with Lie groups and discrete groups [19]. In this case, it is possible to define a measure on the group that is invariant by the group action (called the Haar measure), which makes it possible to define convolutions. Roughly, non-linear operators covariant with some actions of those groups can be thought of as an approximation by a Group Convolution Neural Networks. It is important to note that the inputs of the operators described in these articles are functions that take real values; the much more general class of inputs that take values in vector bundles is, to our knowledge, not covered in the literature. To our knowledge, we are the first work to study the design of equivariant Neural Networks that process vector fields defined over a manifold. In this setting even for M = Rd, it is unclear which type of non-linear operators commute with smaller groups of symmetry such as the Euclidean group. In fact, a generic question holds for manifolds: for a given symmetry group G, what is elementary non-linear building block of a Neural Network? This could be, for instance, useful to design Neural Networks which are Gauge invariant. It is an open question for future work which would be relevant many applications in physics [16]. Furthermore, the fact that the characterization of diffeomorphism invariant operators we exhibited in this paper is very restrictive opens the way for the study of other non-locally ’smaller’ compact groups; we believe that any results in that direction are completely novel. Example of vector operators for L∞ It is slightly unclear how the vector case p = ∞ can be handled in our framework, yet [1] seems to have interesting insights toward this direction. Linearization of Diff(M) In this work, we considered an exact commutation between operators and a symmetries: however, it is unclear which operators approximatively commute with a given symmetry group. Such operators would be better to linearize a high-dimensional symmetry group like Diff(M). 
An important instance of non-linear operators that are non-local and that 'nearly' commute with diffeomorphisms is the Wavelet Scattering representation [23, 7, 28]. Acknowledgments and Disclosure of Funding EO was supported by the project ANR-21-CE23-0030 ADONIS and by EMERG-ADONIS from Alliance SU. GSP was also supported by France Relance and Median Technologies; he would like to warmly thank the NeurIPS Foundation for their financial support (NeurIPS 2022 Scholar Award).
1. What is the focus of the paper in terms of geometric deep learning? 2. What are the strengths of the paper regarding its writing, motivation, and proofs? 3. Are there any concerns or suggestions regarding experimental verification or example demonstrations? 4. Do you have any questions regarding the paper's content or contributions? 5. What are the limitations of the paper, if any?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper studies operators mapping vector and scalar fields defined over a manifold. The author demonstrates that the non-linear operators that act on vector fields and commute with the group of diffeomorphisms are scalar multiplications, which implies that Diff(M) is too rich to obtain non-trivial operators. In the case of vector fields L^p_ω(M, TM), the author demonstrates that these operators are exactly the scalar multiplications. Strengths And Weaknesses As for the strengths of the manuscript, it is well-written and well-motivated, and the statements in this paper are well-supported by proofs. As for the weaknesses, I would prefer some experiments to verify the proposed theorems, or some examples that show the usage of the proposed methods in the paper. Questions Since I am not an expert in geometric deep learning, I have no more questions about this paper. Limitations The limitations are not explicitly discussed in this paper.
NIPS
Title On Non-Linear Operators for Geometric Deep Learning Abstract This work studies operators mapping vector and scalar fields defined over a manifold M, and which commute with its group of diffeomorphisms Diff(M). We prove that in the case of scalar fields L^p_ω(M, R), those operators correspond to point-wise non-linearities, recovering and extending known results on R^d. In the context of Neural Networks defined over M, it indicates that point-wise non-linear operators are the only universal family that commutes with any group of symmetries, and justifies their systematic use in combination with dedicated linear operators commuting with specific symmetries. In the case of vector fields L^p_ω(M, TM), we show that those operators are solely the scalar multiplications. It indicates that Diff(M) is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over the symmetries of M.

1 Introduction

Given a physical domain M and measurements f : M → Y observed over it, one is often interested in processing intrinsic information from f, i.e. information consistent with the symmetries of the domain. Let M denote an operator; it can be seen as a non-linear operator acting on measurements. In words, if two measurements f, f̃ = g.f are related by a symmetry g of the domain, like a rigid motion of an observed molecular compound, we would like our processed data M(f) and M(f̃) to be related by the same symmetry, i.e. that M(g.f) = g.M(f), or equivalently that M commutes with the symmetry transformations of the domain. The study of operators that satisfy such symmetry constraints has played a long and central role in the history of physics and mathematics, motivated by the inherent symmetries of physical laws. More recently, such importance has also extended to the design of machine learning systems, where symmetries improve the sample complexity [25, 3]. For instance, Convolutional Neural Networks build in translation symmetry, whereas Graph Neural Networks build in permutation symmetry, amongst other examples coined under the 'Geometric Deep Learning' umbrella [5, 4]. Lie groups of transformations are of particular interest, because there exists a precise and systematic framework to build such intrinsic operators. Indeed, for a locally compact group G, it is possible to define a Haar measure which is invariant to the action of G [2]; then a simple filtering along the orbit of G allows one to define a class of linear operators that commute with the group action.
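As a toy illustration of this construction (ours, not from the paper), consider the cyclic group Z/n, where the Haar measure is just the uniform counting measure: filtering along the orbit yields a group convolution that commutes with the group action.

```python
import numpy as np

def cyclic_conv(f, psi):
    # Group convolution on Z/n: (f * psi)[x] = sum_g f[x - g] psi[g], indices mod n.
    # The uniform weighting over group elements g plays the role of the Haar measure.
    n = len(f)
    return np.array([sum(f[(x - g) % n] * psi[g] for g in range(n))
                     for x in range(n)])

rng = np.random.default_rng(0)
f, psi = rng.normal(size=6), rng.normal(size=6)
shift = lambda h, t: np.roll(h, t)  # action of the group element t on signals

t = 2
print(np.allclose(cyclic_conv(shift(f, t), psi),
                  shift(cyclic_conv(f, psi), t)))  # True: the convolution commutes
```

No analogue of this recipe exists for Diff(M), which is precisely the obstruction discussed next.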
Examples of locally compact groups are given by specific Lie groups acting on R^d, such as the translations or the rotations O_d(R). Often these Lie groups G only act on a manifold M, and one tries to average along the orbit induced by G. Note that it is possible, beyond invariance, to linearize more complex groups of variability like the diffeomorphisms Diff(M) [7]. While the description of such linear intrinsic structures is of central mathematical importance and forms the basis of Representation theory [30], it is in itself not sufficient to bear fruit in the context of Representation learning using Neural Networks [12]. Indeed, linear operators do not have the capacity to extract the rich information needed to solve challenging high-dimensional learning problems. It is therefore necessary to extend the systematic construction and classification of intrinsic operators to the non-linear case. With that purpose in mind, our work aims at studying the class of (non-linear) operators M which commute with the action of the group Diff(M), the diffeomorphisms over M. This approach will lead to a natural class of non-linear intrinsic operators. Indeed, any group G of symmetries is, by definition, a subgroup of Diff(M), and thus commutes with such M [24]. Consequently, obtaining a non-linear invariant to a symmetry group G could be done by using a cascade of interlacing non-linear operators which commute with Diff(M) and linear operators which commute with G. A notable example of linear operators that are covariant to the Lie group of translations is given by the convolutions along the orbit of the group. These can be constructed thanks to the canonical Haar measure [32]. However, such an approach fails for infinite-dimensional groups, like our object of interest: contrary to Lie groups, Diff(M) is not locally compact and it is thus not possible to define a Haar measure on this group. Our first contribution is to demonstrate that the non-linear operators which act on vector fields (elements of L^p_ω(M, TM)) and which commute with the group of diffeomorphisms are actually just scalar multiplications. This implies that Diff(M) is too rich to obtain non-trivial operators. Our second contribution is to demonstrate that non-linear operators acting on signals in L^p_ω(M, R) are pointwise non-linearities. This fills a gap in the results of [7], and a fortiori justifies the use of point-wise non-linearities in geometric Deep Learning [4]. Let us remark that the study of equivariant operators that take vector fields as input is motivated by the use of Neural Networks in physics, in particular for dynamical systems such as fluid dynamics [8]. For example, one subject of interest in hydrodynamics is how a vector field of velocities evolves; the time evolution of such a field is described by a partial differential equation (PDE), the Navier–Stokes equations, where Neural Networks have found recent applications, as is more generally the case for other PDEs [31]. Our paper is structured as follows: Sec. 2 introduces the necessary formalism that we use throughout this paper; in particular, we formally define the action of diffeomorphisms. Then, we state and discuss our theorems in Sec. 3.1 and sketch their proofs in Sec. 3.2. Rigorous proofs of each statement can be found in the Appendix. 2 Problem Setup 2.1 Related work and motivation In this section, we discuss the notion of intrinsic operators, invariant and covariant non-linear operators, and linear representations of standard symmetry groups.
Then, we formally state our objective. Intrinsic Operators As discussed above, in this work we are interested in intrinsic operators M : L^p(M, E) → L^p(M, E), where M is a Riemannian manifold and E = R or E = TM, capturing respectively the settings of scalar signals and vector fields over M. L^p(M, R) is the space of scalar functions f : M → R whose p-th power is integrable; similarly, L^p(M, TM) is the space of sections of the tangent bundle of M (denoted TM), f : M → TM, whose norm ‖f‖ : M → R is in L^p(M, R). Here the notion of 'intrinsic' means that M is consistent with an equivalence class induced by a symmetry group G in L^p(M, E): if f, f̃ ∈ L^p(M, E) are related by a transformation g ∈ G (in which case we write f = g.f̃), then M(f) = g.M(f̃). Naturally, a stronger equivalence class imposes a stronger requirement on M, and consequently constrains the complexity of M. We now describe plausible techniques used to design such operators M. GM-Convolutions The notion of GM-convolutions [34] is an example of linear covariant operators which commute with the reparametrization of a manifold. In practice, this implies that the weights of a GM-convolution are shared and that the action of GM-convolutions is local – two properties that facilitate implementation and point out the similarity with Lie groups. Another example of a symmetry group is the isometry group of a Riemannian manifold, whose pushforward preserves the metric tensor. In this case, it is well known that isometries [33] are the only diffeomorphisms which commute with a manifold Laplacian. Thus, any linear operator which commutes with isometries leaves the Laplacian's eigenspaces stable. However, little is known about the non-linear counterpart of these symmetry-covariant operators. In this work, we characterize non-linear operators which commute with Diff(M). We will see that such operators are intrinsically defined by Diff(M) and could be combined with any linear operators covariant with a symmetry group G. Non-linear operators It has been shown that Convolutional Neural Networks are dense in the set of non-linear covariant operators [35]. The recipe of the corresponding proof is an extension of the proof of the universal approximation theorem [14]. The Scattering Transform [6, 23] is also an example of a well-understood non-linear operator, which corresponds to a cascade of complex wavelet transforms followed by a point-wise modulus non-linearity. This representation provably linearizes small deformations. Compact Lie Groups In the context of geometric Machine Learning [5], there are several relevant notions of equivalence. For instance, we can consider a compact Lie group G acting on M, and an associated representation in F = {f : M → R}: given g ∈ G and f ∈ F, then g.f(x) ≜ f(g^{-1}.x) for x ∈ M. We then consider f ∼ f̃ related by this group action: f̃ = g.f for some g ∈ G. The operators M which are compatible with such a group action are referred to as being G-equivariant (or covariant to the action of G) in the ML literature [13, 4]. Such groups are typically of finite and small dimension, e.g. the Euclidean transformations of M = R^d, with d = 2 for computer vision applications, or d = 3 for computational biology/chemistry applications. In this case, it is possible to characterize all linear intrinsic operators M as group convolutions [20], leading to a rich family of non-linear intrinsic operators by composing such group convolutions with element-wise non-linear operators, as implemented in modern Neural Networks.
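A minimal discrete sanity check of this blueprint (our sketch, with a permutation of sample positions standing in for a group action on the domain): a pointwise non-linearity commutes with any such action, while a non-pointwise operator such as a local average does not.

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=8)              # a sampled "scalar field"
perm = rng.permutation(8)           # a generic rearrangement of the domain

relu = lambda x: np.maximum(x, 0.0)                               # pointwise
blur = lambda x: np.convolve(x, [0.25, 0.5, 0.25], mode="same")   # non-pointwise

print(np.allclose(relu(f[perm]), relu(f)[perm]))  # True: commutes with any perm
print(np.allclose(blur(f[perm]), blur(f)[perm]))  # False in general
```

The averaging filter only commutes with translations, a specific subgroup, which is exactly the division of labor advocated above: universal pointwise non-linearities composed with group-specific linear operators.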
We highlight that stability to symmetries via non-linear operators finds useful applications, in particular for flat manifolds [7]. Isometries Riemannian manifolds M come with a default equivalence class, which is given by isometries. T_uM denotes the tangent vector space of M at point u ∈ M. If g_u : T_uM × T_uM → R denotes the Riemannian metric tensor at point u ∈ M, a diffeomorphism ψ : M → M is an isometry if g_u(v, w) = g_{ψ(u)}(dψ_u(v), dψ_u(w)) for any u ∈ M and v, w ∈ T_uM. In words, isometries are changes of variables that preserve the local distances in the domain. The set of all isometries forms a Lie group which is locally compact [27]. In this case, one can also build a rich class of intrinsic operators by following the previously explained 'blueprint', namely composing linear intrinsic operators with element-wise non-linearities. As a representative example, the Laplace–Beltrami operator of M only depends on intrinsic metric properties [33]: as said above, isometries preserve the invariant subspaces of a Laplacian. Beyond Isometries While isometries are the 'natural' transformations of the geometric domain, they cannot express high-dimensional sources of variability; indeed, if M is a d-dimensional complete connected Riemannian manifold, its isometry group has dimension at most d(d+1)/2 [10]. This raises the question of whether one can characterize intrinsic operators relative to a broader class of transformations. Another class of important symmetries corresponds to gauge invariance, i.e. transformations which preserve changes of parametrization and which are used in [11, 34] through the notion of G-structure. In this work, we consider the class of transformations given by Diff(M), the diffeomorphisms over M. As shown in the Appendix, compactly supported deformations ψ : M → M define bounded linear operators L_ψ acting from L^p(M, E) to L^p(M, E), and they constitute a far broader class of transformations than isometries. Our proof is mainly based on the use of compactly supported diffeomorphisms. Our objective is to characterize the (non-linear) operators M such that
$\forall \phi \in \mathrm{Diff}(M), \quad L_\phi M = M L_\phi .$
In other words, we aim to understand continuous operators M that commute with deformations. We will show that such operators act locally and that they can be described explicitly, by simple formulas. Writing g = L_ϕ f, the commutation condition is visualized in the following commutative diagram:
$\begin{array}{ccc} f & \xrightarrow{L_\phi} & g \\ M \downarrow & \circlearrowleft & \downarrow M \\ Mf & \xrightarrow{L_\phi} & Mg \end{array}$

2.2 Notations

We will now formally introduce the mathematical objects of interest in this document. Let (M, g) be an orientable, connected, Riemannian manifold of finite dimension d ∈ N*. Let TM denote the tangent bundle of M, i.e. the union of tangent spaces at points u ∈ M. T*M is the cotangent bundle of M. g ∈ Γ(T*M ⊗ T*M) is a section of symmetric positive-definite bilinear forms on the tangent bundle of M. It is common to denote by ΓB the collection of sections of a bundle B; ∧^n T*M for n ≤ d is the bundle of n-linear alternated forms of M, and Γ(∧^n T*M) is the space of sections of this vector bundle over M. For A ⊆ M, we denote by Ā its closure; 1_A is the indicator function of A, i.e. the function which takes value 1 if x ∈ A and 0 otherwise. B(u, r) denotes the ball of radius r around u ∈ M. Any two vectors v, v_1 ∈ V in a pre-Hilbert space (with a scalar product ⟨·,·⟩) are orthogonal, denoted v ⊥ v_1, when ⟨v, v_1⟩ = 0. Fix p ∈ [1, +∞[. Any volume form ω ∈ Γ(∧^d T*M) defines a (positive) measure on the orientable Riemannian manifold M; the total volume of M is ω(M) := ∫_M 1 dω.
Let us define L^p_ω(M, TM), the space of L^p vector fields, as the subspace of measurable functions f : M → TM such that f(u) ∈ T_uM almost everywhere and
$\|f\|_p^p \triangleq \int_{u \in M} g_u(f(u), f(u))^{\frac{p}{2}} \, d\omega(u) < +\infty .$ (1)
We will also consider L^p_ω(M, R), the space of measurable scalar functions (fields) f : M → R that fulfill
$\|f\|_p^p \triangleq \int_{u \in M} |f(u)|^p \, d\omega(u) < +\infty .$ (2)
We may write ‖·‖ instead of ‖·‖_p when there is no ambiguity. For a C^∞ diffeomorphism ϕ ∈ Diff(M), we will consider the action L_ϕ : L^p_ω(M, TM) → L^p_ω(M, TM), which we define for any f ∈ L^p_ω(M, TM) as
$L_\phi f(u) \triangleq d\phi(u)^{-1} . f(\phi(u)) .$
Note that this action is contravariant: L_{ψ∘ϕ} f(u) = d(ψ∘ϕ)^{-1} . f(ψ∘ϕ(u)) = L_ϕ L_ψ f(u). For a scalar function f ∈ L^p_ω(M, R), we define the action of ϕ via
$L_\phi f(u) \triangleq f(\phi(u)) .$
This latter operator is also contravariant. Let A be a measurable set of M and f ∈ L^p(M, E); f 1_A is the product of f with 1_A, i.e. f 1_A is equal to f on A and 0 elsewhere. In what follows we introduce 'constant' fields over an open set; they are denoted c 1_U, with U an open subset of M. For scalar fields, a 'constant' scalar field f 1_U is equal to the same constant c ∈ R for any u ∈ U. On the other hand, 'constant' vector fields f 1_U are vector fields over U for which there is a chart from U to an open subset of R^d in which, for any u ∈ U, f(u) is equal to a constant vector c ∈ R^d; in the vector case we say that the vector field f 1_U can be straightened. If there is no ambiguity, we will use the same notation L_ϕ whether we apply it to L^p_ω(M, R) or L^p_ω(M, TM). We might sometimes refer to L^p_ω(M, R) or L^p_ω(M, TM) as L^p(M, R) or L^p(M, TM). Throughout the article we restrict ourselves to ϕ such that L_ϕ is a bounded operator. Write supp(ϕ) = {u : ϕ(u) ≠ u} for the support of ϕ, and say that ϕ has compact support if supp(ϕ) is compact. We denote by Diff_c(M) ⊂ Diff(M) the set of compactly supported diffeomorphisms. Recall that since M is second-countable, C^∞_c(M) is dense in L^p_ω(M, R) and C^∞_c(M, TM) is dense in L^p_ω(M, TM). Finally, denote by O_d(R) the set of orthogonal operators on R^d. Throughout the article, we might not write explicitly that equalities hold almost everywhere, since this is the default in L^p spaces. As mentioned earlier, compactly supported diffeomorphisms lead to continuous operators, which is made rigorous by the following lemma, whose proof is in the appendix. Lemma 1. If supp(ϕ) is compact, then L_ϕ is bounded.

3 Main theorems

In this section we present our main results. We first show that any (non-linear) deformation-equivariant operator acting on scalar fields must be point-wise (Theorem 1), and then establish that any deformation-equivariant operator acting on vector fields corresponds to a multiplication by a scalar (Theorem 2).

3.1 Theorem statements

Now, we are ready to state our two main theorems:

Theorem 1 (Scalar case). Let M be a connected and orientable manifold of dimension d ≥ 1. We consider a Lipschitz continuous operator M : L^p_ω(M, R) → L^p_ω(M, R), where 1 ≤ p < ∞. Then,
$\forall \phi \in \mathrm{Diff}(M) : \; M L_\phi = L_\phi M$
is equivalent to the existence of a Lipschitz continuous function ρ : R → R that fulfills M[f](m) = ρ(f(m)) a.e. In that case, we have ρ(0) = 0 if ω(M) = ∞.

Theorem 2 (Vector case). Let M be a connected and orientable manifold of dimension d ≥ 1. We consider a continuous operator M : L^p_ω(M, TM) → L^p_ω(M, TM), where 1 ≤ p < ∞. Then,
$\forall \phi \in \mathrm{Diff}(M) : \; M L_\phi = L_\phi M$
is equivalent to the existence of a scalar λ ∈ R such that ∀f ∈ L^p_ω(M, TM) : M[f](m) = λ f(m) a.e.
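To make the two statements concrete, here is a 1-D numerical illustration (ours): on M = R the differential dϕ(u) is the scalar ϕ′(u), so the vector action reads L_ϕf(u) = f(ϕ(u))/ϕ′(u). Scalar multiplication commutes with this action, whereas a pointwise cube, which would be admissible in the scalar case by Theorem 1, does not.

```python
import numpy as np

u = np.linspace(-3.0, 3.0, 11)
phi  = lambda t: t + 0.3 * np.sin(t)      # a 1-D diffeomorphism (phi' > 0)
dphi = lambda t: 1.0 + 0.3 * np.cos(t)    # its differential, here a positive scalar
f    = lambda t: t * np.exp(-t ** 2)      # a vector field on M = R

L_phi = lambda g: (lambda t: g(phi(t)) / dphi(t))   # contravariant vector action

scale = lambda g: (lambda t: 2.5 * g(t))            # M[f] = lambda f   (Theorem 2)
cube  = lambda g: (lambda t: g(t) ** 3)             # pointwise, fine for scalars only

print(np.allclose(scale(L_phi(f))(u), L_phi(scale(f))(u)))  # True: commutes
print(np.allclose(cube(L_phi(f))(u), L_phi(cube(f))(u)))    # False: Jacobian factor
```

The Jacobian factor 1/ϕ′ passes through linear scaling but not through a genuine non-linearity, which is the mechanism behind the dichotomy between the two theorems.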
We highlight that our theorems are quite generic, in the sense that they apply to the manifolds usually used in applications or theory, R^d in particular. Remark 1. The scalar case allows us to recover the standard operators which are exploited in Deep Neural Network architectures. However, Theorem 2 indicates that the group of diffeomorphisms is too rich to obtain non-trivial non-linear operators. Remark 2. The case p = ∞ leads to different results. For instance, in the scalar case we may consider the operator Mf(x) = sup_y |f(y)|, which fulfills L_ϕ Mf = M L_ϕ f but is not pointwise. Remark 3. The condition "ω(M) = ∞ ⟹ ρ(0) = 0" in Theorem 1 is necessary, since in the case M = R, the operator Mf(x) ≜ e^{i f(x)} is not in L^p_ω(M, R). Remark 4. The Lipschitz condition in Theorem 1 is crucial; otherwise, Mf(x) = ρ(f(x)) might not be an operator of L^p_ω(M, R). For instance, if p = 2, M = [0, 1] and Mf(x) = √(f(x)), then letting f(x) = x, we see that in this case f ∈ L^p_ω(M, R) while Mf ∉ L^p_ω(M, R). Remark 5. If M is not Lipschitz, we can find an example which is not even continuous. The following example holds in both the scalar case and the vector case; in both cases f ∈ L^p(M, R), and the only thing that changes is the action of L_ϕ on f. Take M = R and let, for all f ∈ L^p(M, R):
$Mf(x) = 1_{\{z : \lim_{y \to z} f(y) = f(z)\}}(x)\, f(x) .$
It is a measurable function. Let us show that this M is a counterexample in the vector case: for any ϕ ∈ Diff(M) and x ∈ R, one has
$M L_\phi f(x) = 1_{\{z : \lim_{y \to z} f(\phi(y)) = f(\phi(z))\}}(x)\, d\phi(x)^{-1} f(\phi(x))$ (3)
$= 1_{\{z : \lim_{y \to \phi(z)} f(y) = f(\phi(z))\}}(x)\, d\phi(x)^{-1} f(\phi(x))$ (4)
$= 1_{\{z : \lim_{y \to z} f(y) = f(z)\}}(\phi(x))\, d\phi(x)^{-1} f(\phi(x))$ (5)
$= L_\phi M f(x) .$ (6)
However, M is not continuous, as changing any function to 0 on Q does not change its norm but changes the set where the limit exists. More precisely, let c > 0 be a strictly positive scalar; then M[c] = c. Let f = c 1[x ∉ Q]; then M[f] = 0, as {z : ∃ lim_{y→z} f(y)} = ∅. However, c = f almost everywhere while M[c] ≠ M[f]; therefore M is not continuous.

3.2 Proof Sketch

We now describe the main ideas for proving Theorems 1 and 2. The appendix contains complete formal arguments and technical lemmata which we omit here due to lack of space. The two proofs share quite some similarities despite substantially different final results. Three ideas guide our proofs: First, we prove that it is possible to localize M on a certain class of open sets which behaves nicely with the manifold structure, the strongly convex sets, which we denote as O1. This is closely related to the notion of a pre-sheaf [15]. Secondly, we characterize M on small open sets. In the scalar case, we will study the representation of locally constant functions. In the vector case, we will show that locally, the image M(1_U c) of a vector field c is collinear to c provided that U is small enough. We will also show that those local properties are independent of the position on the manifold M via a connectedness argument. Thirdly and finally, we combine a compactness and a density argument to extend this characterization to M, which is developed in Sec. 3.3. Throughout the presentation, we will use the following definitions and theorems obtained from other works: Definition 1 (Strong convexity, from [18]). Let O1 be the collection of open sets which are bounded and strongly convex, i.e. such that any points p, q in such a set can be joined by a geodesic contained in the set. Furthermore, let Ȯ1 = {A ∈ O1 : ∃B ∈ O1, Ā ⊂ B and ω(Ā\A) = 0}.
The intuition behind the definition of Ȯ1 is that all of its elements are contained in a 'security' open set, which avoids degenerate effects on the manifold. In particular, this allows us to control the boundary of a given open set. Theorem 3 (theorem adapted from [17, 18]). (1) Ȯ1 is a system of neighborhoods. (2) Any element of O1 is diffeomorphic to R^d. (3) Both O1 and Ȯ1 are stable under intersection. Theorem 4 (Flowbox theorem, as stated in [9]). Let f, g ∈ C^∞_c(M, TM). For any m ∈ M with f(m) ≠ 0 and g(m) ≠ 0, there exists an open set U ⊂ M and ϕ ∈ Diff(M) such that ϕ(m) = m and L_ϕ(1_U f) = 1_{ϕ(U)} g. We will now present some lemmata that are necessary for the proofs of Theorems 1 and 2. As a first step, we argue that one may assume M(0) = 0, where 0 denotes the constant 0-function. This is because in the appendix we show that M(0) is a constant function C, with C = 0 if ω(M) = ∞. Therefore, we may subtract C from ρ and λ, so that it suffices to prove the theorems for M(0) = 0. Next, a key idea of the proof is to exploit the flexibility of the deformation equivariance to localise the input, i.e. to show that the image of compactly supported functions is also compactly supported. To do so, the following lemma provides a way of collapsing an open ball into a singleton while maintaining good control on the support of the diffeomorphism. Lemma 2 (Key lemma). Let ϵ > 0. There exists a sequence of diffeomorphisms ϕ_n : R^d → R^d, compactly supported in B(0, 1 + ϵ), such that:
$\phi_n(B(0, 1)) = B\left(0, \tfrac{1}{n}\right) , \quad \text{and} \quad \sup_{u \in B(0,1)} \|d\phi_n(u)\| \leq \tfrac{1}{n} .$
Proof. Set ϕ_n(u) = f_n(‖u‖) u, where
$f_n(r) = \begin{cases} \tfrac{1}{n} , & \text{if } |r| \leq 1 \\ 1 , & \text{if } |r| \geq 1 + \epsilon \end{cases}$
and f_n is smoothly interpolated for |r| ∈ [1, 1 + ϵ] in a way that it remains nondecreasing. It is then clear that ϕ_n fulfills the desired properties. We will often use that if the support of ϕ ∈ Diff(M) is such that supp(ϕ) ∩ U = ∅, then for any f ∈ L^p_ω(M, R) one has 1_U f = L_ϕ(1_U f). This implies the following important lemma, for which a rigorous proof can be found in the appendix: Lemma 3. Let U ∈ Ȯ1 and M as in Theorem 1 or Theorem 2. Then, for any f ∈ E, where E = L^p_ω(M, R) or E = L^p_ω(M, TM) respectively, we have: M[f 1_U] = 1_U M[f]. Furthermore, if U is any closed set, the same conclusion applies. Equipped with this result, our proof will characterize the image of functions of the type c 1_U, where either c ∈ R or c is a vector field which can be straightened (isomorphic to a constant vector), via the following lemmata. In the vector case: Lemma 4 (Image of localized vector field). For M as in Theorem 2, there are U ∈ Ȯ1 and λ(U) such that for any f ∈ L^p_ω(M, TM):
$M[f 1_U] = 1_U\, \lambda(U)\, f .$ (7)
Here is the scalar case: Lemma 5 (Image of constant functions, scalar case). Let M be as in Theorem 1. For any U ∈ Ȯ1 and c ∈ R, then: M(c 1_U) = h(c, U) 1_U. Furthermore, c ↦ h(c, U) is Lipschitz for any U ∈ Ȯ1. At this stage, we note that both representations are point-wise, and the next steps of the proofs will be identical for the scalar and vector cases. The extension to L^p_ω(M, R) or L^p_ω(M, TM) will be done thanks to: Lemma 6 (Image of a disjoint union of open sets). Let U_1, ..., U_n ∈ O1 and M as in Theorem 1 or Theorem 2, such that ∀i ≠ j, U_i ∩ U_j = ∅. Then, for any f ∈ E (with E the respective space):
$M\left[\sum_{i=1}^{n} 1_{U_i} f\right] = \sum_{i=1}^{n} M[1_{U_i} f] .$
This lemma states that we can completely characterize M on disjoint unions of simple sets.
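Before gluing these lemmata together, it may help to see a concrete instance of the collapsing maps ϕ_n from Lemma 2 (our sketch; a C¹ smoothstep blend replaces the C^∞ interpolation required in the proof, purely for illustration):

```python
import numpy as np

def phi_n(x, n, eps=0.5):
    # phi_n(u) = f_n(||u||) u with f_n = 1/n on [0, 1] and f_n = 1 on [1 + eps, inf),
    # blended monotonically in between (smoothstep; the proof needs a C-infinity blend).
    r = np.linalg.norm(x)
    if r <= 1.0:
        s = 1.0 / n
    elif r >= 1.0 + eps:
        s = 1.0
    else:
        t = (r - 1.0) / eps
        w = t * t * (3.0 - 2.0 * t)
        s = (1.0 - w) / n + w
    return s * x

rng = np.random.default_rng(3)
x = rng.normal(size=3)
x = 0.9 * x / np.linalg.norm(x)                    # a point inside B(0, 1)
print(np.linalg.norm(phi_n(x, n=10)))              # 0.09: B(0,1) maps into B(0, 1/10)
print(np.allclose(phi_n(2.0 * x, n=10), 2.0 * x))  # True: fixed outside B(0, 1+eps)
```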
We will then need an argument similar to the Vitali covering lemma in order to "glue" those open sets together; it shows that simple functions with disjoint supports can approximate any element of L^p_ω(M, R) or L^p_ω(M, TM) (we only state the lemma for L^p_ω(M, R), as our proof for L^p_ω(M, TM) does not necessarily need this result): Lemma 7 (Local Vitali). For f ∈ C^∞_c(M) and m ∈ M, there exists U ∈ Ȯ1 with m ∈ U, such that for any ϵ > 0, there exist subsets U_1, ..., U_n ∈ Ȯ1 with U_i ⊂ U and numbers c_1, ..., c_n ∈ R such that:
$\big\| \sum_{i=1}^{n} 1_{U_i} c_i - 1_U f \big\| < \epsilon .$
Note that this type of covering is not possible on an arbitrary open set without further assumptions on the manifold, such as bounds on its Ricci curvature [22]. Fortunately, we will only need a local version, which holds because charts are locally bi-Lipschitz. Both Lemma 6 and Lemma 7 imply: Proposition 1. Consider M from either Theorem 1 or 2. Assume that there exists U ∈ Ȯ1 such that M(c 1_V) = h(c, V) 1_V for any V ⊂ U with V ∈ Ȯ1, where c is either a vector field in the case E = L^p_ω(M, TM) or a constant scalar in the case E = L^p_ω(M, R). If we further assume that c ↦ h(c, U) is L-Lipschitz, then
$\forall f \in E, \forall m \in M, \quad M[1_U f](m) = 1_U(m)\, h(f(m), U) .$
Furthermore, this expression does not depend on U, meaning that for any other such Ũ, we have:
$\forall f \in E, \forall m \in U \cap \tilde{U}, \quad M[1_{\tilde{U}} f](m) = 1_U(m)\, h(f(m), U) .$
We briefly discuss the intuition behind Theorem 2. It is linked to the idea that the operators M at hand have to commute with local rotations, and this even for locally constant vector fields. We reduce the characterisation of deformation-equivariant vector operators to an invariance-to-symmetry argument: maps which are equivariant to rotations are pointwise scalar multiples of their argument. The intuition is contained in the following lemma, which is commonly used in physics: Lemma 8 (Invariance to rotation). Let f : R^d → R^d be such that for any W ∈ O_d(R) and x ∈ R^d, one has f(Wx) = W f(x). Then there is λ : [0, ∞) → R such that f(x) = λ(‖x‖) x. Proof. We write f(x) = λ(x) x + x^⊥ with x^⊥ ⊥ x, and suppose that x^⊥ ≠ 0. We introduce W ∈ O_d(R) such that W x^⊥ = −x^⊥ and W x = x. From f(x) = f(Wx) = W f(x) we deduce that x^⊥ = 0, a contradiction. Next, λ(Wx) = λ(x), thus λ(x) = λ(x′) for any ‖x‖ = ‖x′‖. Distinction between scalar and vector case The scalar case is simpler to handle than the vector case: the proof of Theorem 2 requires several more steps, since one needs to show that the point-wise non-linearity is actually a scalar multiplication. We also highlight that the non-linearity is fully determined by its image on locally constant functions. Finally, we conclude the proof of the theorem by appealing to a standard density argument for smooth functions with compact support, combining all the lemmata we have just presented, in Sec. 3.3. 3.3 Proof conclusions (common to the scalar and vector case) In this section, we prove that the local properties of M can be extended globally on M. The main idea is to exploit the well-known Poincaré (inclusion–exclusion) formula, which states that:
$1_{\cup_i U_i} = \sum_{k=1}^{n} (-1)^{k+1} \sum_{i_1 < \dots < i_k} 1_{U_{i_1} \cap U_{i_2} \cap \dots \cap U_{i_k}} ,$
and to localize the action of M on each U_{i_1} ∩ U_{i_2} ∩ ... ∩ U_{i_k} ∈ Ȯ1 thanks to Lemma 3. Proof of Theorem 1 and Theorem 2. Let f be a smooth and compactly supported function. Further consider ∪_{i≤n} U_i a finite covering of its support with U_i ∈ Ȯ1. Using the inclusion–exclusion formula together with Lemma 3, we obtain
$1_{\cup_i U_i} M[f] = \sum_{k=1}^{n} (-1)^{k+1} \sum_{i_1 < \dots < i_k} 1_{U_{i_1} \cap \dots \cap U_{i_k}} M[f] = \sum_{k=1}^{n} (-1)^{k+1} \sum_{i_1 < \dots < i_k} M[f\, 1_{U_{i_1} \cap \dots \cap U_{i_k}}] ,$
where we used that U_{i_1} ∩ U_{i_2} ∩ ... ∩ U_{i_k} ∈ Ȯ1. Now, the support of f is closed and included in ∪_i U_i.
Thus, using Lemma 3:
$M[f] = \sum_{k=1}^{n} (-1)^{k+1} \sum_{i_1 < \dots < i_k} M[f\, 1_{U_{i_1} \cap U_{i_2} \cap \dots \cap U_{i_k}}] .$
Note that if ρ is a pointwise operator with ρ(0) = 0, then ρ(1_U f) = 1_U ρ(f) and
$\sum_{k=1}^{n} (-1)^{k+1} \sum_{i_1 < \dots < i_k} \rho(f\, 1_{U_{i_1} \cap \dots \cap U_{i_k}}) = \sum_{k=1}^{n} (-1)^{k+1} \sum_{i_1 < \dots < i_k} 1_{U_{i_1} \cap \dots \cap U_{i_k}}\, \rho(f)$ (8)
$= 1_{\cup_i U_i}\, \rho(f) = \rho(f) .$ (9)
Thus, Mf = ρ(f), where ρ is obtained from Lemma 4 or 5 combined with Proposition 1. We conclude by density in L^p_ω(M, R) or L^p_ω(M, TM), respectively. This ends the proof.

4 Remarks and conclusion

In this work, we have fully characterized the non-linear operators which commute with the action of smooth deformations. In some sense, this settles the intuitive fact that commutation with the whole diffeomorphism group is too strong a property, leading to a small, nearly trivial family of non-linear intrinsic operators. While on their own they have limited interest for geometric deep representation learning, they can 'upgrade' any family of linear operators associated with any group G ⊂ Diff(M) into a powerful non-linear class, the so-called GDL Blueprint of [4]. This result is also a first step towards characterizing the non-linear operators which commute with gauge transformations, and it could give useful insights for specifying novel gauge-invariant architectures. We now state a couple of unsolved questions and directions for future work.

On the commutativity assumption: Several examples and approximation results [21, 35] exist for operators that commute with Lie groups and discrete groups [19]. In this case, it is possible to define a measure on the group that is invariant under the group action (the Haar measure), which makes it possible to define convolutions. Roughly, non-linear operators covariant with the actions of those groups can be approximated by Group Convolutional Neural Networks. It is important to note that the inputs of the operators described in these articles are real-valued functions; the much more general class of inputs taking values in vector bundles is, to our knowledge, not covered in the literature. Ours is thus, to our knowledge, the first work to study the design of equivariant Neural Networks that process vector fields defined over a manifold. In this setting, even for M = R^d, it is unclear which types of non-linear operators commute with smaller groups of symmetry such as the Euclidean group. In fact, a generic question holds for manifolds: for a given symmetry group G, what is the elementary non-linear building block of a Neural Network? This could, for instance, be useful to design Neural Networks which are gauge invariant. It is an open question for future work which would be relevant to many applications in physics [16]. Furthermore, the fact that the characterization of diffeomorphism-equivariant operators we exhibited in this paper is very restrictive opens the way for the study of other, 'smaller', non-locally compact groups; we believe that any results in that direction would be completely novel.

Example of vector operators for L^∞: It is unclear how the vector case p = ∞ can be handled in our framework, yet [1] seems to offer interesting insights in this direction.

Linearization of Diff(M): In this work, we considered exact commutation between operators and symmetries; however, it is unclear which operators approximately commute with a given symmetry group. Such operators would be better suited to linearizing a high-dimensional symmetry group like Diff(M).
An important instance of non-linear operators that are non-local and that 'nearly' commute with diffeomorphisms is the Wavelet Scattering representation [23, 7, 28]. Acknowledgments and Disclosure of Funding EO was supported by the project ANR-21-CE23-0030 ADONIS and by EMERG-ADONIS from Alliance SU. GSP was also supported by France Relance and Median Technologies; he would like to warmly thank the NeurIPS Foundation for their financial support (NeurIPS 2022 Scholar Award).
1. What are the key contributions and findings of the paper regarding symmetry in machine learning? 2. How does the paper support the use of nonlinear activation functions and linear operations in geometric deep learning? 3. What are the strengths and weaknesses of the paper, particularly in terms of mathematical density and readability? 4. Are there any suggestions for improving the clarity and notation of the paper? 5. What are the limitations and unsolved questions mentioned by the authors?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper contributes two main proofs that support the practice of using non-linear activation functions in conjunction with linear operations in geometric deep learning. They also prove that Diff(M) is too rich and that there is no universal class of non-linear operators to motivate the design of Neural Networks over symmetries of M. Strengths And Weaknesses Strengths: The paper addresses important issues of symmetries in machine learning. It also proves that no universal class of non-linear operators exists that can handle all possible symmetries on a manifold. This is important to know when designing neural networks. Thus the significance, quality, and originality of the paper are good. Weakness: The paper is very dense mathematically and difficult to read for someone who is not embedded in the literature on this topic. Questions One thing I suggest to the authors to help with the clarity is to define more of their notation. Much of the notation is not defined in the main paper despite there being an entire subsection dedicated to it. This could be improved, as there is plenty of space in the paper. Limitations The authors discussed some of the unsolved questions that remain.
NIPS
Title Universally Quantized Neural Compression Abstract A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization. We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985). This allows us to eliminate the mismatch between training and test phases while maintaining a completely differentiable loss function. Implementing the uniform noise channel is a special case of the more general problem of communicating a sample, which we prove is computationally hard if we do not make assumptions about its distribution. However, the uniform special case is efficient as well as easy to implement and thus of great interest from a practical point of view. Finally, we show that quantization can be obtained as a limiting case of a soft quantizer applied to the uniform noise channel, bridging compression with and without quantization.

1 Introduction

Over the last four years, deep learning research into lossy image compression has seen tremendous progress. End-to-end trained neural networks have gone from barely beating JPEG2000 [4] to outperforming the best manually designed compression schemes for images [36, 2]. Despite this success, many challenges remain before end-to-end trained compression becomes a viable alternative to more traditional codecs. Computational complexity, temporal inconsistencies, and perceptual metrics which are effective yet easy to optimize are some of the challenges facing neural networks. In this paper we focus on the issue of quantization. Practical lossy compression schemes rely on quantization to compute a discrete representation which can be transmitted digitally. But quantization is a non-differentiable operation and as such prevents us from optimizing encoders directly via backpropagation [33]. A common workaround is to replace quantization with a differentiable approximation during training but to use quantization at test time [e.g., 32, 4, 1]. However, it is unclear how much this mismatch between training and test phases is hurting performance. A promising alternative is to get rid of quantization altogether [15]. That is, to communicate information in a differentiable manner both at training and at test time. At the heart of this approach is the insight that we can communicate a sample from a possibly continuous distribution using a finite number of bits, also known as the reverse Shannon theorem [8]. However, existing realizations of this approach tend to be either computationally costly or statistically inefficient, that is, they require more bits than the amount of information they transmit. Here, we bridge the gap between the two approaches of dealing with quantization. A popular approximation for quantization is additive uniform noise [4, 5]. In Section 3.2, we show that additive uniform noise can be viewed as an instance of compression without quantization and describe a technique for implementing it at test time. Unlike other approaches to quantizationless compression, this technique is both statistically and computationally efficient. In Section 4.1, we show how to smoothly interpolate between uniform noise and hard quantization while maintaining differentiability. *Equal contribution
We further show that it is possible to analytically integrate out noise when calculating gradients and in some cases drastically reduce their variance (Section 4.2). Finally, we evaluate our approach empirically in Section 5 and find that a better match between training and test phases leads to improved performance, especially in models of lower complexity.

2 Related work

Most prior work on end-to-end trained lossy compression optimizes a rate-distortion loss of the form
$-\log_2 P(\lfloor f(x) \rceil) + \lambda\, d(x, g(\lfloor f(x) \rceil)) .$ (1)
Here, f is an encoder, g is a decoder, P is a probability mass function, and they may all depend on parameters we want to optimize. The distortion d measures the discrepancy between inputs and reconstructions, and the parameter λ > 0 controls the trade-off between it and the number of bits. The rounding function ⌊·⌉ used for quantization and the discreteness of P pose challenges for optimizing the encoder. Several papers have proposed methods to deal with quantization for end-to-end trained lossy compression. Toderici et al. [32] replaced rounding with stochastic rounding to the nearest integer. Theis et al. [31] applied hard quantization during both training and inference but used straight-through gradient estimates to obtain a training signal for the encoder. Agustsson et al. [1] used a smooth approximation of vector quantization that was annealed towards hard quantization during training. Most relevant for our work is the approach taken by Ballé et al. [4], who proposed to add uniform noise during training,
$-\log_2 p(f(x) + u) + \lambda\, d(x, g(f(x) + u)) ,$ (2)
as an approximation to rounding at test time. Here, p is a density and u is a sample of uniform noise drawn from $\mathcal{U}([-0.5, 0.5)^D)$. If the distortion is a mean-squared error, then this approach is equivalent to a variational autoencoder [25, 18] with a uniform encoder [5, 31]. Another line of research studies the simulation of noisy channels using a noiseless channel, that is, the reverse of channel coding. In particular, how can we communicate a sample z from a conditional distribution (the noisy channel), q(z | x), using as few bits as possible (the noiseless channel)? The reverse Shannon theorem of Bennett and Shor [8] shows that it is possible to communicate a sample using a number of bits not much larger than the mutual information between X and Z, I[X, Z]. Existing implementations of reverse channel coding operate on the same principle. First, a large number of samples z_n is generated from a fixed distribution p. Importantly, this distribution does not depend on x, and the same samples can be generated on both the sender's and the receiver's side using a shared source of randomness (for our purposes this would be a pseudorandom number generator with a fixed seed). One of these samples is then selected and its index n communicated digitally. The various methods differ in how this index is selected. Cuff [11] provided a constructive achievability proof for the mutual information bound using an approach which was later dubbed the likelihood encoder [12]. In this approach the index n is picked stochastically with a probability proportional to p(x | z_n). An equivalent approach dubbed MIRACLE was later derived by Havasi et al. [15] using importance sampling. In contrast to Cuff and Song [12], Havasi et al. [15] considered communication of a single sample from q instead of a sequence of samples. MIRACLE also represents the first application of quantizationless compression in the context of neural networks.
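The selection rule shared by the likelihood encoder and MIRACLE fits in a few lines; the sketch below is our illustration, not the authors' implementation, and q_pdf, p_pdf, and the candidate count are placeholder choices. Both sides regenerate the same candidates z_n from p with a shared seed, and only the chosen index is transmitted.

```python
import numpy as np

def send_sample(q_pdf, p_pdf, p_sampler, num_candidates, seed):
    # Sender: pick index n with probability proportional to q(z_n) / p(z_n).
    shared = np.random.default_rng(seed)        # shared source of randomness
    z = p_sampler(shared, num_candidates)
    w = q_pdf(z) / p_pdf(z)
    return np.random.default_rng().choice(num_candidates, p=w / w.sum())

def receive_sample(p_sampler, num_candidates, seed, n):
    # Receiver: regenerate the same candidates and read off z_n.
    shared = np.random.default_rng(seed)
    return p_sampler(shared, num_candidates)[n]

# Toy target q = N(1, 0.5^2) and proposal p = N(0, 1):
q_pdf = lambda z: np.exp(-0.5 * ((z - 1.0) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
p_pdf = lambda z: np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
p_sampler = lambda rng, n: rng.normal(size=n)

n = send_sample(q_pdf, p_pdf, p_sampler, num_candidates=1 << 12, seed=42)
z = receive_sample(p_sampler, num_candidates=1 << 12, seed=42, n=n)
print(n, z)  # sending ~12 bits communicates an approximate sample from q
```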
Originally designed for model compression, MIRACLE was recently adapted to the task of lossy image compression [13]. An earlier but computationally more expensive method based on rejection sampling was described by Harsha et al. [14]. Li and El Gamal [21] described a simple yet efficient approach. The authors proved that it uses at most
$I[X, Z] + \log_2(I[X, Z] + 1) + 4$ (3)
bits on average. To our knowledge, this is the lowest known upper bound on the bits required to communicate a single sample. The overhead is still significant if we want to communicate a small amount of information, but it becomes negligible as the mutual information increases. Finally, we will rely heavily on results on uniform dither and universal quantization [29, 37, 35] to communicate a sample from a uniform distribution (Section 3.2). Choi et al. [9] used universal quantization as a relaxation of hard quantization. However, universal quantization was used in a manner that still produced a non-differentiable loss, which the authors dealt with by using straight-through gradient estimates [7]. In contrast, here we will use fully differentiable losses during training and use the same method of encoding at training and at test time. Roberts [27] applied universal quantization directly to grayscale pixels and found it led to superior picture quality compared to quantization.

3 Compression without quantization

Instead of approximating quantization or relying on straight-through gradient estimates, we would like to use a differentiable channel and thus eliminate any need for approximations during training. Existing methods to simulate a noisy channel $q_{Z|x}$ require simulating a number of random variables $Z_n \sim p_Z$ which is exponential in $D_{KL}[q_{Z|x} \,\|\, p_Z]$ for every x we wish to communicate [e.g., 15]. Since the mutual information I[X, Z] is a lower bound on the average Kullback-Leibler divergence, this creates a dilemma. On the one hand, we would like to keep the divergence small to limit the computational cost, for example by encoding blocks of coefficients (sometimes also referred to as "latents") separately [15, 13]. On the other hand, the information transmitted should be large to keep the statistical overhead small (Equation 3). One might hope that more efficient algorithms exist which can quickly identify an index n without having to explicitly generate all samples. However, such an algorithm is not possible, as it would allow us to efficiently sample distributions which are known to be hard to simulate even approximately (in terms of total variation distance, $D_{TV}$) [22]. More precisely, we have the following lemma. Lemma 1. Consider an algorithm which receives a description of an arbitrary probability distribution q as input and is also given access to an unlimited number of i.i.d. random variables $Z_n \sim p$. It outputs $Z \sim \tilde{q}$ such that its distribution is approximately q in the sense that $D_{TV}[\tilde{q}, q] \leq 1/12$. If RP ≠ NP, then there is no such algorithm whose time complexity is polynomial in $D_{KL}[q \,\|\, p]$. A proof and details are provided in Appendix B. In order to design efficient algorithms for communicating samples, the lemma implies we need to make assumptions about the distributions involved.

3.1 Uniform noise channel

A particularly simple channel is the additive uniform noise channel,
$Z = f(x) + U , \quad U \sim \mathcal{U}([-0.5, 0.5)^D) .$ (4)
Replacing quantization with uniform noise during training is a popular strategy for end-to-end trained compression [e.g., 4, 5, 36].
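A minimal NumPy sketch (ours, with a toy linear transform and a stand-in Gaussian prior; all names are hypothetical) makes the mismatch between the training objective of Equation (2) and the test-time objective of Equation (1) explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = 0.5 * rng.normal(size=(4, 8))       # toy linear encoder
W_dec = np.linalg.pinv(W_enc)               # and its pseudo-inverse as decoder

def log2_density(z, scale=2.0):
    # Stand-in factorized Gaussian prior p(z), evaluated elementwise in bits.
    return (-0.5 * (z / scale) ** 2 - np.log(scale * np.sqrt(2 * np.pi))) / np.log(2)

def rd_loss(x, lam=0.01, train=True):
    y = W_enc @ x
    z = y + rng.uniform(-0.5, 0.5, size=y.shape) if train else np.round(y)
    rate = -log2_density(z).sum()            # bits, up to the discretization of p
    distortion = ((x - W_dec @ z) ** 2).mean()
    return rate + lam * distortion

x = rng.normal(size=8)
print(rd_loss(x, train=True))   # differentiable surrogate used during training
print(rd_loss(x, train=False))  # hard rounding applied at test time
```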
In the following, however, we are no longer going to view this as an approximation to quantization but as a differentiable channel for communicating information. The uniform noise channel turns out to be easy to simulate both computationally and statistically efficiently.

3.2 Universal quantization

For a fixed y ∈ R, universal quantization is quantization with a random offset,
$\lfloor y - U \rceil + U , \quad U \sim \mathcal{U}([-0.5, 0.5)) .$ (5)
This form of quantization has the remarkable property of being equal in distribution to adding uniform noise directly [27, 29, 37]. That is,
$\lfloor y - U \rceil + U \sim y + U' ,$ (6)
where U′ is another source of identical uniform noise. This property has made universal quantization a useful tool for studying quantization, especially in settings where the quantization noise $Y - \lfloor Y \rceil$ is roughly uniform. Here, we are interested in it not as an approximation but as a way to simulate a differentiable channel for communicating information. At training time, we will add uniform noise as in prior work [4, 5]. For deployment, we propose to use universal quantization instead of switching to hard quantization, thereby eliminating the mismatch between training and test phases. If Y is a random variable representing a coefficient produced by a transform, the encoder calculates the discrete $K = \lfloor Y - U \rceil$ and transmits it to the decoder. The decoder has access to U and computes K + U. How many bits are required to encode K? Zamir and Feder [35] showed that the conditional entropy of K given U is
$H[K \mid U] = I[Y, Y + U] = h[Y + U] .$ (7)
This bound on the coding cost has two important properties. First, being equivalent to the differential entropy of Y + U means it is differentiable if the density of Y is differentiable. Second, the cost of transmitting K is equivalent to the amount of information gained by the decoder. In contrast to other methods for compression without quantization (Equation 3), the number of bits required is only bounded by the amount of information transmitted. In practice, we will use a model to approximate the distribution of Y + U, from which the distribution of K can be derived, $P(K = k \mid U = u) = p_{Y+U}(k + u)$. Here, $p_{Y+U}$ is the same density that occurs in the loss in Equation 2. Another advantage of universal quantization over more general reverse channel coding schemes is that it is much more computationally efficient. Its computational complexity grows only linearly with the number of coefficients to be transmitted instead of exponentially with the number of bits. Universal quantization has previously been applied to neural networks using the same shift for all coefficients, $U_i = U_j$ [9]. We note that this form of universal quantization is not equivalent to adding either dependent or independent noise during training. Adding dependent noise would not create an information bottleneck, since a single coefficient which is always zero could be used by the decoder to recover the noise and therefore the exact values of the other coefficients. In the following, we will always assume independent noise as in Equation 4. Generalizations to other forms of noise, such as Gaussian noise, are possible and are discussed in Appendix C. Here, we will focus on the simple uniform noise channel (Section 3.2) frequently used in the neural compression literature [4, 5, 23, 36].

4 Compression with quantization

While the uniform noise channel has the advantage of being differentiable, there are still scenarios where we may want to use quantization.
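Before turning to that question, note that the channel just described is only a few lines to implement; the sketch below (ours) also checks empirically that ⌊y − U⌉ + U matches y + U′ in distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 1.37                                    # a fixed coefficient to transmit

u = rng.uniform(-0.5, 0.5, size=100_000)    # shared dither (same seed on both sides)
k = np.round(y - u)                         # encoder: only the integers k are coded
z = k + u                                   # decoder: reconstruct with the shared u

u_prime = rng.uniform(-0.5, 0.5, size=100_000)
print(z.mean(), (y + u_prime).mean())       # both ~ y
print(z.var(), (y + u_prime).var())         # both ~ 1/12: same uniform distribution
```

With this implementation in mind, we now return to when hard quantization itself is preferable.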
For instance, under some conditions universal quantization is known to be suboptimal with respect to mean squared error (MSE) [34, Theorem 5.5.1]. However, this assumes a fixed encoder and decoder. In the following, we show that quantization is a limiting case of universal quantization if we allow flexible encoders and decoders. Hence it is possible to recover any benefits quantization might have while maintaining a differentiable loss function.

4.1 Simulating quantization with uniform noise

We first observe that applying rounding as the last step of an encoder and again as the first step of a decoder would eliminate the effects of any offset u ∈ [−0.5, 0.5),
$\lfloor \lfloor y \rceil + u \rceil = \lfloor y \rceil .$ (8)
This suggests that we may be able to recover some of the benefits of hard quantization without sacrificing differentiability by using a smooth approximation to rounding,
$s(s(y) + u) \approx \lfloor y \rceil .$ (9)
We are going to use the following function, which is differentiable everywhere (Appendix C):
$s_\alpha(y) = \lfloor y \rfloor + \frac{1}{2} \frac{\tanh(\alpha r)}{\tanh(\alpha/2)} + \frac{1}{2} , \quad \text{where } r = y - \lfloor y \rfloor - \frac{1}{2} .$ (10)
The function is visualized in Figure 1A. Its parameter α controls the fidelity of the approximation:
$\lim_{\alpha \to 0} s_\alpha(y) = y , \qquad \lim_{\alpha \to \infty} s_\alpha(y) = \lfloor y \rceil .$ (11)
After observing a value z for the random variable $s_\alpha(Y) + U$, we can do slightly better if our goal is to minimize the MSE of Y. Instead of soft rounding twice, the optimal reconstruction is obtained with $r_\alpha(s_\alpha(y) + u)$, where
$r_\alpha(z) = E[Y \mid s_\alpha(Y) + U = z] .$ (12)
It is not difficult to see that
$p(y \mid z) \propto 1\{y \in (s_\alpha^{-1}(z - 0.5),\, s_\alpha^{-1}(z + 0.5)]\}\, p(y) ,$ (13)
where $1\{\cdot\}$ evaluates to 1 if its argument is true and 0 otherwise. That is, the posterior over y is a truncated version of the prior distribution. If we assume that the prior is smooth enough to be approximately uniform in each interval, we have
$E[Y \mid s_\alpha(Y) + U = z] \approx \frac{s_\alpha^{-1}(z - 0.5) + s_\alpha^{-1}(z + 0.5)}{2} = s_\alpha^{-1}(z - 0.5) + 0.5 ,$ (14)
where we have used that $s_\alpha^{-1}(z + 1) = s_\alpha^{-1}(z) + 1$. We will assume this form for $r_\alpha$ going forward, for which we still have that
$\lim_{\alpha \to \infty} r_\alpha(s_\alpha(y) + u) = \lfloor y \rceil ,$ (15)
that is, we recover hard quantization as a limiting case. Thus, in cases where quantization is desirable, we can anneal α towards hard quantization during training while still having a differentiable loss. Smooth approximations to quantization have been used previously, though without the addition of noise [1]. Note that soft rounding without noise does not create a bottleneck, since the function is invertible and the input coefficients can be fully recovered by the decoder. Thus, Equation 15 offers a more principled approach to approximating quantization.

4.2 Reducing the variance of gradients

When α is large, the derivatives of $s_\alpha$ and $r_\alpha$ tend to be close to zero with high probability and very large with low probability. This leads to gradients for the encoder with potentially large variance. To compensate, we propose to analytically integrate out the uniform noise as follows. Let h : R → R be a differentiable function and, as before, let $U \sim \mathcal{U}([-0.5, 0.5))$ be a uniform random variable. We are interested in computing the following derivative:
$\frac{d}{dy} E[h(y + U)] = E\left[\frac{d}{dy} h(y + U)\right] .$ (16)
To get a low-variance estimate of the expectation's derivative we could average over many samples of U. However, note that we also have
$\frac{d}{dy} E[h(y + U)] = \frac{d}{dy} \int_{y - 0.5}^{y + 0.5} h(v)\, dv = h(y + 0.5) - h(y - 0.5) .$ (17)
That is, the gradient of the expectation can be computed analytically with finite differences. Furthermore, Equation 17 allows us to evaluate the derivative of the expectation even when h is not differentiable.
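A short numerical check (ours) of the soft rounding function in Equation (10) and of the finite-difference identity in Equation (17):

```python
import numpy as np

def soft_round(y, alpha):
    # Equation (10): s_alpha(y) = floor(y) + 0.5 * tanh(alpha r)/tanh(alpha/2) + 0.5,
    # with r = y - floor(y) - 0.5.
    r = y - np.floor(y) - 0.5
    return np.floor(y) + 0.5 * np.tanh(alpha * r) / np.tanh(alpha / 2) + 0.5

y = np.linspace(-1.2, 1.2, 5)
print(soft_round(y, alpha=1e-4))  # alpha -> 0: approximately the identity
print(soft_round(y, alpha=50.0))  # alpha -> inf: approximately round(y)

# Equation (17): the derivative of E[h(y + U)] is an exact finite difference.
h = lambda t: soft_round(t, alpha=8.0)
y0 = 0.3
u = np.random.default_rng(0).uniform(-0.5, 0.5, size=1_000_000)
eps = 1e-4
mc_grad = (h(y0 + eps + u).mean() - h(y0 - eps + u).mean()) / (2 * eps)
print(mc_grad, h(y0 + 0.5) - h(y0 - 0.5))  # both ~ 1.0, matching the identity
```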
Now consider the case where we apply h pointwise to a vector $y + U$ with $U \sim \mathcal{U}([-0.5, 0.5)^D)$, followed by a multivariable function ℓ : R^D → R. Then
$\frac{\partial}{\partial y_i} E[\ell(h(y + U))] = E\left[\frac{\partial}{\partial z_i} \ell(Z)\Big|_{Z = h(y+U)} \cdot \frac{\partial}{\partial y_i} h(y_i + U_i)\right]$ (18)
$\approx E\left[\frac{\partial}{\partial z_i} \ell(Z)\Big|_{Z = h(y+U)}\right] \cdot E\left[\frac{\partial}{\partial y_i} h(y_i + U_i)\right]$ (19)
$= E\left[\frac{\partial}{\partial z_i} \ell(Z)\Big|_{Z = h(y+U)}\right] \cdot \left(h(y_i + 0.5) - h(y_i - 0.5)\right) ,$ (20)
where the approximation in (19) is obtained by assuming the partial derivative $\partial \ell(Z)/\partial z_i$ is uncorrelated with $\partial h(y_i + U_i)/\partial y_i$. This would hold, for example, if ℓ were locally linear around h(y), such that its derivative is the same for any possible perturbed value h(y + u). Equation 20 corresponds to the following modification of backpropagation: the forward pass is computed in a standard manner (that is, evaluating ℓ(h(y + u)) for a sampled instance u), but in the backward pass we replace the derivative $\partial h(y_i + u_i)/\partial y_i$ with its expected value, $h(y_i + 0.5) - h(y_i - 0.5)$. Consider a model where soft-rounding follows the encoder, $y = s_\alpha(f(x))$, and a factorial entropy model is used. The rate-distortion loss becomes
$-\sum_i E[\log_2 p_i(y_i + U_i)] + \lambda\, E[d(x, g(r_\alpha(y + U)))] .$ (21)
We can apply Equation 17 directly to the rate term to calculate the gradient of y (Figure 1B). For the distortion term we use Equation 20, where $r_\alpha$ takes the role of h. Interestingly, for the soft-rounding function and its inverse the expected derivative takes the form of a straight-through gradient estimate [7]. That is, the expected derivative is always 1. Given a cumulative distribution $c_Y$ for Y, the density of $Z = s(Y) + U$ can be shown to be
$p_{s(Y)+U}(z) = c_Y(s^{-1}(z + 0.5)) - c_Y(s^{-1}(z - 0.5)) .$ (22)
We use this result to parametrize the density of Z (see Appendix E for details). Figure 1B illustrates such a model where Y is assumed to have a logistic distribution.

5 Experiments
5.1 Models

We conduct experiments with two models: (a) a simple linear model and (b) a more complex model based on the hyperprior architecture proposed by Ballé et al. [6] and extended by Minnen et al. [23]. The linear model operates on 8x8 blocks, similar to JPEG/JFIF [16]. It is implemented by setting the encoder f to be a convolution with a kernel size of 8, a stride of 8, and 192 output channels. The decoder g is set to the corresponding transposed convolution. Both are initialized independently with random orthogonal matrices [28]. For the density model we use the non-parametric model of Ballé et al. [6], adjusted for soft-rounding (Appendix E). The hyperprior model is a much stronger model and is based on the (non-autoregressive) "Mean & Scale Hyperprior" architecture described by Minnen et al. [23]. Here, the coefficients produced by a neural encoder f, $y = f(x)$, are mapped by a second encoder h to "hyper latents" $v = h(y)$. Uniform noise is then applied to both sets of coefficients. A sample $w = v + u_1$ is first transmitted and subsequently used to conditionally encode a sample $z = y + u_2$. Finally, a neural decoder computes the reconstruction as $\hat{x} = g(z)$. Following previous work, the conditional distribution is assumed to be Gaussian,
$p(y \mid w) = \mathcal{N}(y;\, m_\mu(w),\, m_\sigma(w)) ,$ (23)
where $m_\mu$ and $m_\sigma$ are two neural networks. When integrating soft quantization into the architecture, we center the quantizer around the mean prediction $m_\mu(w)$,
$y = s_\alpha(f(x) - m_\mu(w)) , \quad z = y + u_2 , \quad \hat{x} = g(r_\alpha(z) + m_\mu(w)) ,$ (24)
and adjust the conditional density accordingly. This corresponds to transmitting the residual between y and the mean prediction $m_\mu(w)$ (soft rounded) across the uniform noise channel. As for the linear model, we use a non-parametric model for the density of v.
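Equation (22) is straightforward to implement; below is a sketch (ours), using a logistic prior as in Figure 1B and our own numerical inversion of s_α, which is not necessarily the parametrization of Appendix E:

```python
import numpy as np

def soft_round_inv(z, alpha):
    # Invert Equation (10) interval by interval:
    # y = floor(z) + 0.5 + atanh((2 (z - floor(z)) - 1) * tanh(alpha/2)) / alpha.
    t = (2.0 * (z - np.floor(z)) - 1.0) * np.tanh(alpha / 2.0)
    t = np.clip(t, -1 + 1e-12, 1 - 1e-12)  # numerical safety at interval edges
    return np.floor(z) + 0.5 + np.arctanh(t) / alpha

def logistic_cdf(y, mu=0.0, s=1.0):
    return 1.0 / (1.0 + np.exp(-(y - mu) / s))

def density_z(z, alpha):
    # Equation (22): p_{s(Y)+U}(z) = c_Y(s^{-1}(z + 0.5)) - c_Y(s^{-1}(z - 0.5)).
    return (logistic_cdf(soft_round_inv(z + 0.5, alpha))
            - logistic_cdf(soft_round_inv(z - 0.5, alpha)))

z = np.linspace(-8.0, 8.0, 4001)
p = density_z(z, alpha=8.0)
print(p.sum() * (z[1] - z[0]))  # ~ 1: a valid density for the noisy coefficient Z
```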
We consider the following three approaches for each model: Uniform Noise + Quantization: The model is trained with uniform noise but uses quantization for inference. This is the approach that is widely used in neural compression [e.g., 4, 5, 23, 36]. We refer to this setting as UN + Q or as the "test-time quantization baseline". Uniform Noise + Universal Quantization: Here the models use the uniform noise channel during training as well as for inference, eliminating the train-test mismatch. We refer to this setting as UN + UQ. As these models have the same training objective as UN + Q, we can train a single model and evaluate it for both settings. Uniform Noise + Universal Quantization + Soft Rounding: Here we integrate a soft quantizer (Section 4.1) into the uniform noise channel (both during training and at test time), recovering the potential benefits of quantization while maintaining the match between training and test phases using universal quantization. We refer to this setting as UN + UQ + SR.

5.2 Training

The training examples are 256x256 pixel crops extracted from a set of 1M high-resolution JPEG images collected from the internet. The images' initial height and width range from 3,000 to 5,000 pixels, but images were randomly resized such that the smaller dimension is between 533 and 1,200 pixels before taking crops. We optimized all models for mean squared error (MSE). The Adam optimizer [19] was applied for 2M steps with a batch size of 8 and a learning rate of 10^{-4}, which is reduced to 10^{-5} after 1.6M steps. For the first 5,000 steps only the density models were trained, and the learning rates of the encoder and decoder transforms were kept at zero. The training time was about 30 hours for the linear models and about 60 hours for the hyperprior models on an Nvidia V100 GPU. For the hyperprior models we set λ = 2^i for i ∈ {−6, ..., 1} and decayed it by a factor of 1/10 after 200k steps. For the linear models we used slightly smaller values λ = 0.4 · 2^i and reduced it by a factor of 1/2 after 100k steps and again after 200k steps. For soft rounding we linearly annealed the parameter α from 1 to 16 over the full 2M steps. At the end of training, α is large enough that soft rounding gives near-identical results to rounding.

5.3 Results

We evaluate all models on the Kodak [20] dataset by computing the rate-distortion (RD) curve in terms of bits-per-pixel (bpp) versus peak signal-to-noise ratio (PSNR). In Figure 2A we show results for the linear model. When comparing the UN + UQ model, which uses universal quantization, to the test-time quantization baseline UN + Q, we see that despite the train-test mismatch, using quantization improves the RD performance at test time (hatched area). However, looking at UN + UQ + SR, we obtain an improvement in terms of RD performance (shaded area) over the test-time quantization baseline. In Figure 3A we can observe similar albeit weaker effects for the hyperprior model. There is again a performance gap between UN + Q and UN + UQ. Introducing soft rounding again improves the RD performance, outperforming the test-time quantization baseline at low bitrates. The smaller difference can be explained by the deeper networks' ability to imitate functionality otherwise performed by soft-rounding. For example, $r_\alpha$ has a denoising effect which a powerful enough decoder can absorb. In Figure 2B we illustrate the effect of using expected gradients on the linear model. We did not observe big differences when using the same α with $s_\alpha$ and $r_\alpha$ (not shown).
5.2 Training

The training examples are 256×256-pixel crops extracted from a set of 1M high-resolution JPEG images collected from the internet. The images' initial heights and widths range from 3,000 to 5,000 pixels, but images were randomly resized such that the smaller dimension is between 533 and 1,200 pixels before taking crops. We optimized all models for mean squared error (MSE). The Adam optimizer [19] was applied for 2M steps with a batch size of 8 and a learning rate of 10⁻⁴, which was reduced to 10⁻⁵ after 1.6M steps. For the first 5,000 steps only the density models were trained, and the learning rates of the encoder and decoder transforms were kept at zero. The training time was about 30 hours for the linear models and about 60 hours for the hyperprior models on an Nvidia V100 GPU.

For the hyperprior models we set λ = 2^i for i ∈ {−6, …, 1} and decayed it by a factor of 1/10 after 200k steps. For the linear models we used slightly smaller values, λ = 0.4 · 2^i, and reduced them by a factor of 1/2 after 100k steps and again after 200k steps. For soft rounding we linearly annealed the parameter α from 1 to 16 over the full 2M steps. At the end of training, α is large enough that soft rounding gives near-identical results to rounding.

5.3 Results

We evaluate all models on the Kodak [20] dataset by computing the rate-distortion (RD) curve in terms of bits per pixel (bpp) versus peak signal-to-noise ratio (PSNR).

In Figure 2A we show results for the linear model. When comparing the UN + UQ model, which uses universal quantization, to the test-time quantization baseline UN + Q, we see that despite the train-test mismatch, using quantization improves the RD performance at test time (hatched area). However, looking at UN + UQ + SR, we obtain an improvement in RD performance (shaded area) over the test-time quantization baseline.

In Figure 3A we observe similar albeit weaker effects for the hyperprior model. There is again a performance gap between UN + Q and UN + UQ. Introducing soft rounding again improves the RD performance, outperforming the test-time quantization baseline at low bitrates. The smaller difference can be explained by the deeper networks' ability to imitate functionality otherwise performed by soft rounding. For example, r_α has a denoising effect which a powerful enough decoder can absorb.

In Figure 2B we illustrate the effect of using expected gradients on the linear model. We did not observe big differences when using the same α for s_α and r_α (not shown). However, using s_7 and r_16 (that is, different values of α for soft rounding and reconstruction) we saw significant speedups in convergence and gaps in performance at high bitrates. For the hyperprior model we find expected gradients beneficial both in terms of performance and stability of training. In Figure 3B, we consider the UN + UQ + SR setting using either the linear schedule α = 1, …, 16 or alternatively a fixed α ∈ {7, 13}, with and without expected gradients. We found that for α > 7, the models would not train stably (especially at the higher bitrates) without expected gradients (both when annealing α and for a fixed α = 13) and obtained poorer performance.

In summary, for both the linear model and the hyperprior we observe that despite a train-test mismatch, the effect of quantization is positive (UN + Q vs. UN + UQ), but further improvements can be gained by introducing soft rounding (UN + UQ + SR) into the uniform noise channel. Furthermore, we find that expected gradients help speed up convergence and stabilize training.
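For reference, the RD points above follow the usual definitions; a short sketch, assuming 8-bit images (peak value 255) and a measured bitstream length — the function names and these assumptions are ours, not specifics from the paper:

```python
import math

def psnr(mse):
    # Peak signal-to-noise ratio in dB, with MSE computed on the 0-255 pixel scale.
    return 10.0 * math.log10(255.0 ** 2 / mse)

def bits_per_pixel(num_bits, height, width):
    # Total bitstream length divided by the number of pixels.
    return num_bits / (height * width)
```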
6 Conclusion

The possibility of efficiently communicating samples has only recently been studied in information theory [10, 11] and even more recently been recognized in machine learning [15, 13]. We connected this literature to an old idea from rate-distortion theory, uniformly dithered or universal quantization [27, 29, 37, 35], which allows us to efficiently communicate a sample from a uniform distribution. Unlike more general approaches, universal quantization is computationally efficient. This is only possible because it considers a constrained class of distributions, as shown in Lemma 1.

Intriguingly, universal quantization makes it possible to implement at test time an approach which was already popular for training neural networks [4]. This allowed us to study and eliminate existing gaps between training and test losses. Furthermore, we showed that interpolating between the two approaches in a principled manner is possible using soft-rounding functions.

For ease of training and evaluation, our empirical findings were based on MSE. We found that already here a simple change can lead to improved performance, especially for models of low complexity. However, generative compression [26, 2] may benefit more strongly from compression without quantization. Theis et al. [31] showed that uniform noise and quantization can be perceptually very different, suggesting that adversarial and other perceptual training losses may be more sensitive to a mismatch between training and test phases. Roberts [27] found that replacing quantization with dithered quantization can improve picture quality when applied directly to grayscale pixels. Similarly, we find that reconstructions of the linear model have visible blocking artefacts when using quantization, as would be expected given the model's similarity to JPEG/JFIF [16]. In contrast, universal quantization masks the blocking artefacts almost completely at the expense of introducing grain (Appendix G).

Finally, we only studied one-dimensional uniform dither here. Two generalizations are discussed in Appendix C and may provide additional advantages. We hope that our paper will inspire work into richer classes of distributions which are easy to communicate in a computationally efficient manner.

Broader Impact

Poor internet connectivity and high traffic costs are still a reality in many developing countries [3]. But internet connections are often poor in developed countries as well, due to congestion in crowded areas or insufficient mobile network coverage. By improving compression rates, neural compression has the potential to make information more broadly available. About 79% of global IP traffic is currently made up of videos [17]. This means that work on image and video compression in particular has the potential to impact a lot of people.

Assigning fewer bits to one image is only possible by simultaneously assigning more bits to other images. Care needs to be taken to make sure that training sets are representative. Generative compression in particular bears the risk of misrepresenting content, but this is outside the scope of this paper.

Acknowledgments

We would like to thank Johannes Ballé for helpful discussions and valuable comments on this manuscript. This work was performed and funded by Google.
1. What is the focus and contribution of the paper regarding lossy compression techniques for encoders?
2. What are the strengths of the proposed approach, particularly in its novel ideas and applications?
3. What are the weaknesses of the paper, especially regarding its experimental results and effectiveness in practice?
4. Do you have any concerns about the motivation and main contributions of the paper?
5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

This paper studies lossy compression techniques for encoders. Unlike the popular previous strategy of training the encoder with a differentiable approximation of hard quantization by adding uniform noise, this paper first proposes to use universal quantization in both the training and testing phases, which avoids the train-test mismatch issue. To deal with cases where one still wants to use quantization for training, the authors propose a smooth approximation to the quantizing function based on the universal quantization scheme, and provide a method to reduce the variance of its gradient computation. Finally, the authors provide experiments on the Kodak dataset showing that with the proposed strategy we can get improved PSNR and a lower test loss.

Strengths

1. The topic is useful in practice with many applications. It will be interesting to many communities.
2. The ideas of adopting universal quantization in neural compression and replacing quantizing functions with a smooth surrogate are novel.

Weaknesses

The motivation is not clear enough for me. In other words, it seems that the result does not resolve the problem addressed. On the upside, it is absolutely a good attempt to apply universal quantization to this compression problem. However, the main concern is the effectiveness of this approach in practice. From Figure 2 and Figure 3, we see that using solely the proposed universal quantization (UN+UQ) is much worse than the prior method directly using hard quantization (UN+Q). Moreover, the performance of using additional soft rounding (UN+UQ+SR) is only slightly better than UN+Q, especially on the more complex model, where the performances are almost the same. In addition, the SR method introduces two more parameters (α for s and r), which are harder to tune, while UN+Q is simpler. One of the major motivations of using UQ is to avoid the train-test mismatch, but from the experiments mentioned above it performs even worse. This actually undermines the motivation. Additionally, the experiments are conducted on only one dataset, which makes the results less convincing. Hence, I would say that though applying UQ to encoder compression is new, the improvement over previous work is more or less marginal.
NIPS
1. What is the main contribution of the paper regarding neural network compression?
2. What are the strengths of the proposed approach, particularly in its ability to enable differentiable loss functions?
3. What are the weaknesses of the paper, especially in terms of its empirical evaluation?
4. Do you have any concerns about the assumptions made in the paper, such as the bias of the approximation at equation 19?
5. How does the reviewer assess the novelty and practicality of the proposed approach compared to prior works in data and model compression?
Summary and Contributions Strengths Weaknesses
Summary and Contributions

Neural-network-based compressors usually apply additive uniform noise during training as a proxy for the quantization that is performed at test time. This creates a mismatch between the training and testing phases. This work proposes to instead apply universal quantization at test time, thus eliminating the mismatch between training and test phases while maintaining a differentiable loss function. It is based on the fact that adding uniform noise to an input x is equivalent to subtracting a uniform random variable from x, rounding the result, and then adding the same uniform random variable back. As a result, by sharing a random seed across the encoder and decoder we can easily implement universal quantization for neural-network-based compressors. The authors show that this is an instance of the more general problem of efficiently communicating samples, which is computationally hard without distributional assumptions but simple and practical for the uniform noise case. While this framework bypasses the need for quantization, the authors argue that there are still scenarios where one may desire hard quantization. As direct rounding does not allow for gradient-based optimisation, the authors propose to instead use a soft rounding function with a hyperparameter alpha; small values of alpha make the soft rounding behave like an identity, and large values of alpha make it behave like hard rounding. Instead of directly applying their soft-rounding function to an input y, which is invertible and can lead to memorisation in the decoder, the authors further add uniform noise to the soft-rounded value and then perform an MSE-optimal reconstruction of the original y. Finally, the authors note that the variance of the gradients of the soft-rounding function can be high for large values of alpha, and they propose a way to "marginalize" the randomness in a part of the gradient expression, which empirically leads to more stable optimisation. The authors then evaluate both universal quantization and universal quantization with soft rounding on the Kodak dataset with a simple linear model as well as the more flexible Mean & Scale Hyperprior model from the prior literature. The results show that in general additive uniform noise with test-time hard quantization works better than universal quantization; however, by incorporating soft rounding the authors are able to improve, albeit slightly, upon the additive noise + test-time quantization setting.

Strengths

This work makes contributions in two main themes which are relevant to (a part of) the NeurIPS community: compression with and without quantization. For the first point, the authors present soft rounding, which allows gradients to flow through, along with an approximate marginalisation of the noise in the gradients that can reduce their variance. For the second point, the authors employ the concept of universal quantization, which is simple, practical, and easy to implement in existing frameworks. Both of these contributions can be valuable for a broad range of research in both data and model compression. The application of universal quantization as well as the approximate marginalisation of the gradients are novel contributions.

Weaknesses

While this paper is solid from a theoretical standpoint, I find that the empirical evaluation / experimental results are lacking.
The premise of the paper seemed to imply that closing the gap between the training- and test-time behaviour of the algorithm would be beneficial, but unfortunately this does not seem to be the case. Additive uniform noise together with test-time hard quantization seems to be better than universal quantization, implying that the mismatch between the train and test phases is not detrimental performance-wise. Furthermore, while the addition of soft rounding to the universal quantization procedure seems to close the gap and improve upon the vanilla setting, the improvements seem to be small for the larger models which are typically employed, which makes me wonder whether universal quantization + soft rounding is necessary for sufficiently large architectures. Furthermore, evaluation on other datasets, such as BSD100 or Urban100, would make the experimental section of this work stronger. Finally, I would appreciate it if the authors could elaborate a bit on the bias of the approximation made at eq. 19; does this assumption hold in practice? You could, for example, compare the errors of your expression against the one where you use eq. 18 averaged over multiple samples of the original gradient (ideally at multiple stages of training).
NIPS
Title Universally Quantized Neural Compression Abstract A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization. We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985). This allows us to eliminate the mismatch between training and test phases while maintaining a completely differentiable loss function. Implementing the uniform noise channel is a special case of the more general problem of communicating a sample, which we prove is computationally hard if we do not make assumptions about its distribution. However, the uniform special case is efficient as well as easy to implement and thus of great interest from a practical point of view. Finally, we show that quantization can be obtained as a limiting case of a soft quantizer applied to the uniform noise channel, bridging compression with and without quantization. 1 Introduction Over the last four years, deep learning research into lossy image compression has seen tremendous progress. End-to-end trained neural networks have gone from barely beating JPEG2000 [4] to outperforming the best manually designed compression schemes for images [36, 2]. Despite this success, many challenges remain before end-to-end trained compression becomes a viable alternative to more traditional codecs. Computational complexity, temporal inconsistencies, and perceptual metrics which are effective yet easy to optimize are some of the challenges facing neural networks. In this paper we focus on the issue of quantization. Practical lossy compression schemes rely on quantization to compute a discrete representation which can be transmitted digitally. But quantization is a non-differentiable operation and as such prevents us from optimizing encoders directly via backpropagation [33]. A common workaround is to replace quantization with a differentiable approximation during training but to use quantization at test time [e.g., 32, 4, 1]. However, it is unclear how much this mismatch between training and test phases is hurting performance. A promising alternative is to get rid of quantization altogether [15]. That is, to communicate information in a differentiable manner both at training and at test time. At the heart of this approach is the insight that we can communicate a sample from a possibly continuous distribution using a finite number of bits, also known as the reverse Shannon theorem [8]. However, existing realizations of this approach tend to be either computationally costly or statistically inefficient, that is, they require more bits than they transmit information. Here, we bridge the gap between the two approaches of dealing with quantization. A popular approximation for quantization is additive uniform noise [4, 5]. In Section 3.2, we show that additive uniform noise can be viewed as an instance of compression without quantization and describe a technique for implementing it at test time. Unlike other approaches to quantizationless compression, this technique is both statistically and computationally efficient. In Section 4.1, we show how to smoothly interpolate between uniform noise and hard quantization while maintaining differentiability. ⇤Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 
We further show that it is possible to analytically integrate out noise when calculating gradients and in some cases drastically reduce their variance (Section 4.2). Finally, we evaluate our approach empirically in Section 5 and find that a better match between training and test phases leads to improved performance especially in models of lower complexity. 2 Related work Most prior work on end-to-end trained lossy compression optimizes a rate-distortion loss of the form log2 P (bf(x)e) + d(x, g(bf(x)e)). (1) Here, f is an encoder, g is a decoder, P is a probability mass function and they may all depend on parameters we want to optimize. The distortion d measures the discrepancy between inputs and reconstructions and the parameter > 0 controls the trade-off between it and the number of bits. The rounding function b·e used for quantization and the discreteness of P pose challenges for optimizing the encoder. Several papers have proposed methods to deal with quantization for end-to-end trained lossy compression. Toderici et al. [32] replaced rounding with stochastic rounding to the nearest integer. Theis et al. [31] applied hard quantization during both training and inference but used straight-through gradient estimates to obtain a training signal for the encoder. Agustsson et al. [1] used a smooth approximation of vector quantization that was annealed towards hard quantization during training. Most relevant for our work is the approach taken by Ballé et al. [4], who proposed to add uniform noise during training, log2 p(f(x) + u) + d(x, g(f(x) + u)), (2) as an approximation to rounding at test time. Here, p is a density and u is a sample of uniform noise drawn from U([ 0.5, 0.5)D). If the distortion is a mean-squared error, then this approach is equivalent to a variational autoencoder [25, 18] with a uniform encoder [5, 31]. Another line of research studies the simulation of noisy channels using a noiseless channel, that is, the reverse of channel coding. In particular, how can we communicate a sample z from a conditional distribution (the noisy channel), q(z | x), using as few bits as possible (the noiseless channel)? The reverse Shannon theorem of Bennett and Shor [8] shows that it is possible to communicate a sample using a number of bits not much larger than the mutual information between X and Z, I[X,Z]. Existing implementations of reverse channel coding operate on the same principle. First, a large number of samples zn is generated from a fixed distribution p. Importantly, this distribution does not depend on x and the same samples can be generated on both the sender’s and the receiver’s side using a shared source of randomness (for our purposes this would be a pseudorandom number generator with a fixed seed). One of these samples is then selected and its index n communicated digitally. The various methods differ in how this index is selected. Cuff [11] provided a constructive achievability proof for the mutual information bound using an approach which was later dubbed the likelihood encoder [12]. In this approach the index n is picked stochastically with a probability proportional to p(x | zn). An equivalent approach dubbed MIRACLE was later derived by Havasi et al. [15] using importance sampling. In contrast to Cuff and Song [12], Havasi et al. [15] considered communication of a single sample from q instead of a sequence of samples. MIRACLE also represents the first application of quantizationless compression in the context of neural networks. 
Originally designed for model compression, it was recently adapted to the task of lossy image compression [13]. An earlier but computationally more expensive method based on rejection sampling was described by Harsha et al. [14]. Li and Gamal [21] described a simple yet efficient approach. The authors proved that it uses at most I[X,Z] + log2(I[X,Z] + 1) + 4 (3) bits on average. To our knowledge, this is the lowest known upper bound on the bits required to communicate a single sample. The overhead is still significant if we want to communicate a small amount of information but becomes negligible as the mutual information increases. Finally, we will rely heavily on results on uniform dither and universal quantization [29, 37, 35] to communicate a sample from a uniform distribution (Section 3.2). Choi et al. [9] used universal quantization as a relaxation of hard quantization. However, universal quantization was used in a manner that still produced a non-differentiable loss, which the authors dealt with by using straightthrough gradient estimates [7]. In contrast, here we will use fully differentiable losses during training and use the same method of encoding at training and at test time. Roberts [27] applied universal quantization directly to grayscale pixels and found it lead to superior picture quality compared to quantization. 3 Compression without quantization Instead of approximating quantization or relying on straight-through gradient estimates, we would like to use a differentiable channel and thus eliminate any need for approximations during training. Existing methods to simulate a noisy channel qZ|x require simulating a number of random variables Zn ⇠ pZ which is exponential in DKL[qZ|x || pZ ] for every x we wish to communicate [e.g., 15]. Since the mutual information I[X,Z] is a lower bound on the average Kullback-Leibler divergence, this creates a dilemma. On the one hand, we would like to keep the divergence small to limit the computational cost. For example, by encoding blocks of coefficients (sometimes also referred to as “latents”) separately [15, 13]. On the other hand, the information transmitted should be large to keep the statistical overhead small (Equation 3). One might hope that more efficient algorithms exist which can quickly identify an index n without having to explicitly generate all samples. However, such an algorithm is not possible as it would allow us to efficiently sample distributions which are known to be hard to simulate even approximately (in terms of total variation distance, DTV) [22]. More precisely, we have the following lemma. Lemma 1. Consider an algorithm which receives a description of an arbitrary probability distribution q as input and is also given access to an unlimited number of i.i.d. random variables Zn ⇠ p. It outputs Z ⇠ q̃ such that its distribution is approximately q in the sense that DTV[q̃, q] 1/12. If RP 6= NP , then there is no such algorithm whose time complexity is polynomial in DKL[q || p]. A proof and details are provided in Appendix B. In order to design efficient algorithms for communicating samples, the lemma implies we need to make assumptions about the distributions involved. 3.1 Uniform noise channel A particularly simple channel is the additive uniform noise channel, Z = f(x) +U , U ⇠ U([ 0.5, 0.5)D). (4) Replacing quantization with uniform noise during training is a popular strategy for end-to-end trained compression [e.g., 4, 5, 36]. 
In the following, however, we are no longer going to view this as an approximation to quantization but as a differentiable channel for communicating information. The uniform noise channel turns out to be easy to simulate computationally and statistically efficiently. 3.2 Universal quantization For a fixed y 2 R, universal quantization is quantization with a random offset, by Ue+ U, U ⇠ U([ 0.5, 0.5)). (5) This form of quantization has the remarkable property of being equal in distribution to adding uniform noise directly [27, 29, 37]. That is, by Ue+ U ⇠ y + U 0, (6) where U 0 is another source of identical uniform noise. This property has made universal quantization a useful tool for studying quantization, especially in settings where quantization noise Y bY e is roughly uniform. Here, we are interested in it not as an approximation but as a way to simulate a differentiable channel for communicating information. At training time, we will add uniform noise as in prior work [4, 5]. For deployment, we propose to use universal quantization instead of switching to hard quantization, thereby eliminating the mismatch between training and test phases. If Y is a random variable representing a coefficient produced by a transform, the encoder calculates discrete K = bY Ue and transmits it to the decoder. The decoder has access to U and computes K +U . How many bits are required to encode K? Zamir and Feder [35] showed that the conditional entropy of K given U is H[K | U ] = I[Y, Y + U ] = h[Y + U ]. (7) This bound on the coding cost has two important properties. First, being equivalent to the differential entropy of Y + U means it is differentiable if the density of Y is differentiable. Second, the cost of transmitting K is equivalent to the amount of information gained by the decoder. In contrast to other methods for compression without quantization (Equation 3), the number of bits required is only bounded by the amount of information transmitted. In practice, we will use a model to approximate the distribution of Y + U from which the distribution of K can be derived, P (K = k | U = u) = pY+U (k + u). Here, pY+U is the same density that occurs in the loss in Equation 2. Another advantage of universal quantization over more general reverse channel coding schemes is that it is much more computationally efficient. Its computational complexity grows only linearly with the number of coefficients to be transmitted instead of exponentially with the number of bits. Universal quantization has previously been applied to neural networks using the same shift for all coefficients, Ui = Uj [9]. We note that this form of universal quantization is not equivalent to adding either dependent or independent noise during training. Adding dependent noise would not create an information bottleneck, since a single coefficient which is always zero could be used by the decoder to recover the noise and therefore the exact values of the other coefficients. In the following, we will always assume independent noise as in Equation 4. Generalizations to other forms of noise such as Gaussian noise are possible and are discussed in Appendix C. Here, we will focus on a simple uniform noise channel (Section 3.2) as frequently used in the neural compression literature [4, 5, 23, 36]. 4 Compression with quantization While the uniform noise channel has the advantage of being differentiable, there are still scenarios where we may want to use quantization. 
For instance, under some conditions universal quantization is known to be suboptimal with respect to mean squared error (MSE) [34, Theorem 5.5.1]. However, this assumes a fixed encoder and decoder. In the following, we show that quantization is a limiting case of universal quantization if we allow flexible encoders and decoders. Hence it is possible to recover any benefits quantization might have while maintaining a differentiable loss function. 4.1 Simulating quantization with uniform noise We first observe that applying rounding as the last step of an encoder and again as the first step of a decoder would eliminate the effects of any offset u 2 [ 0.5, 0.5), bbye+ ue = bye. (8) This suggests that we may be able to recover some of the benefits of hard quantization without sacrificing differentiability by using a smooth approximation to rounding, s(s(y) + u) ⇡ bye. (9) We are going to use the following function which is differentiable everywhere (Appendix C): s↵(y) = byc+ 1 2 tanh(↵r) tanh(↵/2) + 1 2 , where r = y byc 1 2 . (10) The function is visualized in Figure 1A. Its parameter ↵ controls the fidelity of the approximation: lim ↵!0 s↵(y) = y, lim ↵!1 s↵(y) = bye. (11) After observing a value z for random variable s↵(Y ) + U , we can do slightly better if our goal is to minimize the MSE of Y . Instead of soft rounding twice, the optimal reconstruction is obtained with r↵(s↵(y) + u), where r↵(z) = E[Y | s↵(Y ) + U = z]. (12) It is not difficult to see that p(y | z) / y 2 (s 1↵ (z 0.5), s 1↵ (z + 0.5)] p(y), (13) where evaluates to 1 if its argument is true and 0 otherwise. That is, the posterior over y is a truncated version of the prior distribution. If we assume that the prior is smooth enough to be approximately uniform in each interval, we have E[Y | s↵(Y ) + U = z] ⇡ s 1 ↵ (z 0.5) + s 1↵ (z + 0.5) 2 = s 1↵ (z 0.5) + 0.5. (14) where we have used that s↵(z + 1) = s↵(z) + 1. We will assume this form for r↵ going forward for which we still have that lim↵!1 r↵(s↵(y) + u) = bye, (15) that is, we recover hard quantization as a limiting case. Thus in cases where quantization is desirable, we can anneal ↵ towards hard quantization during training while still having a differentiable loss. Smooth approximations to quantization have been used previously though without the addition of noise [1]. Note that soft rounding without noise does not create a bottleneck since the function is invertible and the input coefficients can be fully recovered by the decoder. Thus, Equation 15 offers a more principled approach to approximating quantization. 4.2 Reducing the variance of gradients When ↵ is large, the derivatives of s↵ and r↵ tend to be close to zero with high probability and very large with low probability. This leads to gradients for the encoder with potentially large variance. To compensate we propose to analytically integrate out the uniform noise as follows. Let h : R ! R be a differentiable function and, as before, let U ⇠ U([ 0.5, 0.5)) be a uniform random variable. We are interested in computing the following derivative: d dy E[h(y + U)] = E d dy h(y + U) . (16) To get a low-variance estimate of the expectation’s derivative we could average over many samples of U . However, note that we also have d dy E[h(y + U)] = d dy Z y+0.5 y 0.5 h(y + u)du = h(y + 0.5) h(y 0.5). (17) That is, the gradient of the expectation can be computed analytically with finite differences. Furthermore, Equation 17 allows us to evaluate the derivative of the expectation even when h is not differentiable. 
4.2 Reducing the variance of gradients When $\alpha$ is large, the derivatives of $s_\alpha$ and $r_\alpha$ tend to be close to zero with high probability and very large with low probability. This leads to gradients for the encoder with potentially large variance. To compensate, we propose to analytically integrate out the uniform noise as follows. Let $h : \mathbb{R} \to \mathbb{R}$ be a differentiable function and, as before, let $U \sim \mathcal{U}([-0.5, 0.5))$ be a uniform random variable. We are interested in computing the following derivative: $\frac{d}{dy}\mathbb{E}[h(y + U)] = \mathbb{E}\left[\frac{d}{dy} h(y + U)\right].$ (16) To get a low-variance estimate of the expectation's derivative we could average over many samples of $U$. However, note that we also have $\frac{d}{dy}\mathbb{E}[h(y + U)] = \frac{d}{dy}\int_{y-0.5}^{y+0.5} h(u)\, du = h(y + 0.5) - h(y - 0.5).$ (17) That is, the gradient of the expectation can be computed analytically with finite differences. Furthermore, Equation 17 allows us to evaluate the derivative of the expectation even when $h$ is not differentiable. Now consider the case where we apply $h$ pointwise to a vector $y + U$ with $U \sim \mathcal{U}([-0.5, 0.5)^D)$, followed by a multivariable function $\ell : \mathbb{R}^D \to \mathbb{R}$. Then $\frac{\partial}{\partial y_i}\mathbb{E}[\ell(h(y + U))] = \mathbb{E}\left[\frac{\partial}{\partial z_i}\ell(Z)\Big|_{Z=h(y+U)} \cdot \frac{\partial}{\partial y_i} h(y_i + U_i)\right]$ (18) $\approx \mathbb{E}\left[\frac{\partial}{\partial z_i}\ell(Z)\Big|_{Z=h(y+U)}\right] \cdot \mathbb{E}\left[\frac{\partial}{\partial y_i} h(y_i + U_i)\right]$ (19) $= \mathbb{E}\left[\frac{\partial}{\partial z_i}\ell(Z)\Big|_{Z=h(y+U)}\right] \cdot \big(h(y_i + 0.5) - h(y_i - 0.5)\big),$ (20) where the approximation in (19) is obtained by assuming the partial derivative $\frac{\partial}{\partial z_i}\ell(Z)$ is uncorrelated with $\frac{\partial}{\partial y_i}h(y_i + U_i)$. This would hold, for example, if $\ell$ were locally linear around $h(y)$ such that its derivative is the same for any possible perturbed value $h(y + u)$. Equation 20 corresponds to the following modification of backpropagation: the forward pass is computed in a standard manner (that is, evaluating $\ell(h(y + u))$ for a sampled instance $u$), but in the backward pass we replace the derivative $\frac{\partial}{\partial y_i}h(y_i + u_i)$ with its expected value, $h(y_i + 0.5) - h(y_i - 0.5)$. Consider a model where soft rounding follows the encoder, $y = s_\alpha(f(x))$, and a factorial entropy model is used. The rate-distortion loss becomes $-\sum_i \mathbb{E}[\log_2 p_i(y_i + U_i)] + \lambda\, \mathbb{E}[d(x, g(r_\alpha(y + U)))].$ (21) We can apply Equation 17 directly to the rate term to calculate its gradient with respect to $y$ (Figure 1B). For the distortion term we use Equation 20, where $r_\alpha$ takes the role of $h$. Interestingly, for the soft-rounding function and its inverse the expected derivative takes the form of a straight-through gradient estimate [7]. That is, the expected derivative is always 1. Given a cumulative distribution $c_Y$ for $Y$, the density of $Z = s(Y) + U$ can be shown to be $p_{s(Y)+U}(z) = c_Y(s^{-1}(z + 0.5)) - c_Y(s^{-1}(z - 0.5)).$ (22) We use this result to parametrize the density of $Z$ (see Appendix E for details). Figure 1B illustrates such a model where $Y$ is assumed to have a logistic distribution.
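This backward-pass substitution is naturally expressed as a custom autograd function. The following PyTorch sketch (our own illustration, not the authors' code) wraps an arbitrary pointwise h; with h = rounding it reproduces the straight-through behavior noted above, since h(y_i + 0.5) - h(y_i - 0.5) = 1 almost everywhere.

```python
import torch

def noisy_apply(h, y, u):
    """Forward: h(y + u) for one sampled dither u (applied elementwise).
    Backward: replaces dh(y_i + u_i)/dy_i with its expectation over the noise,
    h(y_i + 0.5) - h(y_i - 0.5), as in Eq. (17) and Eq. (20)."""

    class _ExpectedGrad(torch.autograd.Function):
        @staticmethod
        def forward(ctx, y):
            ctx.save_for_backward(y)
            return h(y + u)

        @staticmethod
        def backward(ctx, grad_output):
            (y,) = ctx.saved_tensors
            return grad_output * (h(y + 0.5) - h(y - 0.5))

    return _ExpectedGrad.apply(y)

y = torch.tensor([0.3, 1.7, -0.2], requires_grad=True)
u = torch.empty_like(y).uniform_(-0.5, 0.5)

# With h = rounding, the expected derivative equals 1 almost everywhere --
# exactly the straight-through estimate mentioned in the text.
z = noisy_apply(torch.round, y, u)
z.sum().backward()
print(y.grad)   # tensor([1., 1., 1.])
```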
5 Experiments 5.1 Models We conduct experiments with two models: (a) a simple linear model and (b) a more complex model based on the hyperprior architecture proposed by Ballé et al. [6] and extended by Minnen et al. [23]. The linear model operates on 8x8 blocks, similar to JPEG/JFIF [16]. It is implemented by setting the encoder $f$ to be a convolution with a kernel size of 8, a stride of 8, and 192 output channels. The decoder $g$ is set to the corresponding transposed convolution. Both are initialized independently with random orthogonal matrices [28]. For the density model we use the non-parametric model of Ballé et al. [6], adjusted for soft rounding (Appendix E). The hyperprior model is a much stronger model and is based on the (non-autoregressive) "Mean & Scale Hyperprior" architecture described by Minnen et al. [23]. Here, the coefficients produced by a neural encoder $f$, $y = f(x)$, are mapped by a second encoder $h$ to "hyper latents" $v = h(y)$. Uniform noise is then applied to both sets of coefficients. A sample $w = v + u_1$ is first transmitted and subsequently used to conditionally encode a sample $z = y + u_2$. Finally, a neural decoder computes the reconstruction as $\hat{x} = g(z)$. Following previous work, the conditional distribution is assumed to be Gaussian, $p(y \mid w) = \mathcal{N}(y;\, m_\mu(w),\, m_\sigma(w)),$ (23) where $m_\mu$ and $m_\sigma$ are two neural networks. When integrating soft quantization into the architecture, we center the quantizer around the mean prediction $m_\mu(w)$, $y = s_\alpha(f(x) - m_\mu(w)), \qquad z = y + u_2, \qquad \hat{x} = g(r_\alpha(z) + m_\mu(w)),$ (24) and adjust the conditional density accordingly. This corresponds to transmitting the residual between $y$ and the mean prediction $m_\mu(w)$ (soft-rounded) across the uniform noise channel. As for the linear model, we use a non-parametric model for the density of $v$. We consider the following three approaches for each model: Uniform Noise + Quantization: The model is trained with uniform noise but uses quantization for inference. This is the approach that is widely used in neural compression [e.g., 4, 5, 23, 36]. We refer to this setting as UN + Q or as the "test-time quantization baseline". Uniform Noise + Universal Quantization: Here the models use the uniform noise channel during training as well as for inference, eliminating the train-test mismatch. We refer to this setting as UN + UQ. As these models have the same training objective as UN + Q, we can train a single model and evaluate it in both settings. Uniform Noise + Universal Quantization + Soft Rounding: Here we integrate a soft quantizer (Section 4.1) into the uniform noise channel (both during training and at test time), recovering the potential benefits of quantization while maintaining the match between training and test phases using universal quantization. We refer to this setting as UN + UQ + SR.
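The three settings differ only in how a coefficient reaches the decoder at test time. The sketch below is our own condensation of the three pipelines (reusing the soft rounding helpers from the Section 4.1 sketch; training of the transforms and entropy model is omitted):

```python
import numpy as np

# Assumes soft_round and soft_round_conditional_mean from the earlier sketch.

def transmit(y, mode, rng, alpha=16.0):
    """Return (integers to entropy-code, decoder input) for the three settings."""
    if mode == "UN+Q":                                # quantization at test time
        k = np.round(y)
        return k, k
    u = rng.uniform(-0.5, 0.5, size=np.shape(y))      # shared dither
    if mode == "UN+UQ":                               # uniform noise channel
        k = np.round(y - u)
        return k, k + u
    if mode == "UN+UQ+SR":                            # soft quantizer in the channel
        ys = soft_round(y, alpha)
        k = np.round(ys - u)
        return k, soft_round_conditional_mean(k + u, alpha)
    raise ValueError(mode)

rng = np.random.default_rng(1)
y = rng.normal(size=5)                                # coefficients from some encoder f(x)
for mode in ("UN+Q", "UN+UQ", "UN+UQ+SR"):
    k, dec_in = transmit(y, mode, rng)
    print(mode, k, np.round(dec_in, 2))
```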
5.2 Training The training examples are 256x256 pixel crops extracted from a set of 1M high-resolution JPEG images collected from the internet. The images' initial height and width range from 3,000 to 5,000 pixels, but images were randomly resized such that the smaller dimension is between 533 and 1,200 pixels before taking crops. We optimized all models for mean squared error (MSE). The Adam optimizer [19] was applied for 2M steps with a batch size of 8 and a learning rate of $10^{-4}$, which is reduced to $10^{-5}$ after 1.6M steps. For the first 5,000 steps only the density models were trained and the learning rates of the encoder and decoder transforms were kept at zero. The training time was about 30 hours for the linear models and about 60 hours for the hyperprior models on an Nvidia V100 GPU. For the hyperprior models we set $\lambda = 2^i$ for $i \in \{-6, \dots, -1\}$ and decayed it by a factor of 1/10 after 200k steps. For the linear models we used slightly smaller $\lambda = 0.4 \cdot 2^i$ and reduced it by a factor of 1/2 after 100k steps and again after 200k steps. For soft rounding we linearly annealed the parameter $\alpha$ from 1 to 16 over the full 2M steps. At the end of training, $\alpha$ is large enough that soft rounding gives near-identical results to rounding. 5.3 Results We evaluate all models on the Kodak [20] dataset by computing the rate-distortion (RD) curve in terms of bits per pixel (bpp) versus peak signal-to-noise ratio (PSNR). In Figure 2A we show results for the linear model. When comparing the UN + UQ model, which uses universal quantization, to the test-time quantization baseline UN + Q, we see that despite the train-test mismatch, using quantization improves the RD performance at test time (hatched area). However, looking at UN + UQ + SR, we obtain an improvement in terms of RD performance (shaded area) over the test-time quantization baseline. In Figure 3A we can observe similar albeit weaker effects for the hyperprior model. There is again a performance gap between UN + Q and UN + UQ. Introducing soft rounding again improves the RD performance, outperforming the test-time quantization baseline at low bitrates. The smaller difference can be explained by the deeper networks' ability to imitate functionality otherwise performed by soft rounding. For example, $r_\alpha$ has a denoising effect which a powerful enough decoder can absorb. In Figure 2B we illustrate the effect of using expected gradients on the linear model. We did not observe big differences when using the same $\alpha$ for $s_\alpha$ and $r_\alpha$ (not shown). However, using $s_7$ and $r_{16}$ we saw significant speedups in convergence and gaps in performance at high bitrates. For the hyperprior model we find expected gradients beneficial both in terms of performance and stability of training. In Figure 3B, we consider the UN + UQ + SR setting using either the linear schedule $\alpha = 1, \dots, 16$ or alternatively a fixed $\alpha \in \{7, 13\}$, with and without expected gradients. We found that for $\alpha > 7$, the models would not train stably (especially at the higher bitrates) without expected gradients (both when annealing $\alpha$ and for fixed $\alpha = 13$) and obtained poorer performance. In summary, for both the linear model and the hyperprior we observe that despite the train-test mismatch, the effect of quantization is positive (UN + Q vs. UN + UQ), but that further improvements can be gained by introducing soft rounding (UN + UQ + SR) into the uniform noise channel. Furthermore, we find that expected gradients help to speed up convergence and stabilize training. 6 Conclusion The possibility to efficiently communicate samples has only recently been studied in information theory [10, 11] and even more recently been recognized in machine learning [15, 13]. We connected this literature to an old idea from rate-distortion theory, uniformly dithered or universal quantization [27, 29, 37, 35], which allows us to efficiently communicate a sample from a uniform distribution. Unlike more general approaches, universal quantization is computationally efficient. This is only possible because it considers a constrained class of distributions, as shown in Lemma 1. Intriguingly, universal quantization makes it possible to implement at test time an approach which was already popular for training neural networks [4]. This allowed us to study and eliminate existing gaps between training and test losses. Furthermore, we showed that interpolating between the two approaches in a principled manner is possible using soft-rounding functions. For ease of training and evaluation, our empirical findings were based on MSE. We found that already here a simple change can lead to improved performance, especially for models of low complexity. However, generative compression [26, 2] may benefit more strongly from compression without quantization. Theis et al. [31] showed that uniform noise and quantization can be perceptually very different, suggesting that adversarial and other perceptual training losses may be more sensitive to a mismatch between training and test phases. Roberts [27] found that replacing quantization with dithered quantization can improve picture quality when applied directly to grayscale pixels. Similarly, we find that reconstructions of the linear model have visible blocking artefacts when using quantization, as would be expected given the model's similarity to JPEG/JFIF [16]. In contrast, universal quantization masks the blocking artefacts almost completely, at the expense of introducing grain (Appendix G). Finally, here we only studied one-dimensional uniform dither. Two generalizations are discussed in Appendix C and may provide additional advantages. We hope that our paper will inspire work into richer classes of distributions which are easy to communicate in a computationally efficient manner. Broader Impact Poor internet connectivity and high traffic costs are still a reality in many developing countries [3]. But also in developed countries, internet connections are often poor due to congestion in crowded areas or insufficient mobile network coverage.
By improving compression rates, neural compression has the potential to make information more broadly available. About 79% of global IP traffic is currently made up of videos [17]. This means that work on image and video compression in particular has the potential to impact a lot of people. Assigning fewer bits to one image is only possible by simultaneously assigning more bits to other images. Care needs to be taken to make sure that training sets are representative. Generative compression in particular bears the risk of misrepresenting content but is outside the scope of this paper. Acknowledgments We would like to thank Johannes Ballé for helpful discussions and valuable comments on this manuscript. This work was performed and funded by Google.
1. What are the main contributions of the paper regarding handling quantization in learned systems? 2. What are the strengths of the paper, particularly in building upon recent advances in deep CNN compression? 3. What are the weaknesses of the paper, especially regarding the performance inconsistency between UN+UQ and UN+Q? 4. Do you have any questions or concerns regarding the UQ methodology and its potential practical value? 5. How does the reviewer assess the focus and significance of the paper's content within the NeurIPS community?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors describe two methods for handling quantization (a discontinuous operation) in learned systems, by 1) implementing a form of Ziv's universal quantization for transmitting noisy continuous values, and 2) introducing smoothed (and thus differentiable) quantizers. They examine performance of the first, and the combination of the two, on a simple linear coder, and on one recently published (Minnen et al., 2018). Strengths Focused and well-written, the paper builds on recent advances in deep CNN compression, in particular Ballé et al. 2017 and Minnen et al. 2018. Those papers introduced a uniform noise approximation to the quantizer, so as to allow for a continuous and differentiable rate+distortion loss function during training, but then reverted to quantization for testing. Here the authors make use of Ziv's result to provide a direct encoding of the uniform noise values, thus allowing the test phase to be fully consistent with the training. I think this is a nice contribution (and was happy to learn about Ziv's result, which I'd not seen). The use, in addition, of a "softened" differentiable quantizer (as proposed in Agustsson 2019) leads to better performance. Weaknesses * I was surprised to see, after the authors touted the advantages of using consistent training and test implementations, that the results of the (UN+UQ) system were significantly worse than those of the "inconsistent" solution introduced by Ballé et al. 2017 (UN+Q). Only when the softened quantizer is added (UN+UQ+SR) do we see a relatively small improvement. Why? Are there potential ways to improve this? * This makes one wonder about how much the UQ noise matters. In particular, it would be instructive to see a comparison to (UN+SR). Given the previous comment, one might suspect this would lead to even better performance - and thus that the UQ methodology, despite its mathematical interest, is not of practical value. * Although I think it's important and interesting, this is a pretty heavy and narrowly-focused topic for the NeurIPS community.
NIPS
Title Universally Quantized Neural Compression Abstract A popular approach to learning encoders for lossy compression is to use additive uniform noise during training as a differentiable approximation to test-time quantization. We demonstrate that a uniform noise channel can also be implemented at test time using universal quantization (Ziv, 1985). This allows us to eliminate the mismatch between training and test phases while maintaining a completely differentiable loss function. Implementing the uniform noise channel is a special case of the more general problem of communicating a sample, which we prove is computationally hard if we do not make assumptions about its distribution. However, the uniform special case is efficient as well as easy to implement and thus of great interest from a practical point of view. Finally, we show that quantization can be obtained as a limiting case of a soft quantizer applied to the uniform noise channel, bridging compression with and without quantization. 1 Introduction Over the last four years, deep learning research into lossy image compression has seen tremendous progress. End-to-end trained neural networks have gone from barely beating JPEG2000 [4] to outperforming the best manually designed compression schemes for images [36, 2]. Despite this success, many challenges remain before end-to-end trained compression becomes a viable alternative to more traditional codecs. Computational complexity, temporal inconsistencies, and perceptual metrics which are effective yet easy to optimize are some of the challenges facing neural networks. In this paper we focus on the issue of quantization. Practical lossy compression schemes rely on quantization to compute a discrete representation which can be transmitted digitally. But quantization is a non-differentiable operation and as such prevents us from optimizing encoders directly via backpropagation [33]. A common workaround is to replace quantization with a differentiable approximation during training but to use quantization at test time [e.g., 32, 4, 1]. However, it is unclear how much this mismatch between training and test phases is hurting performance. A promising alternative is to get rid of quantization altogether [15]. That is, to communicate information in a differentiable manner both at training and at test time. At the heart of this approach is the insight that we can communicate a sample from a possibly continuous distribution using a finite number of bits, also known as the reverse Shannon theorem [8]. However, existing realizations of this approach tend to be either computationally costly or statistically inefficient, that is, they require more bits than they transmit information. Here, we bridge the gap between the two approaches of dealing with quantization. A popular approximation for quantization is additive uniform noise [4, 5]. In Section 3.2, we show that additive uniform noise can be viewed as an instance of compression without quantization and describe a technique for implementing it at test time. Unlike other approaches to quantizationless compression, this technique is both statistically and computationally efficient. In Section 4.1, we show how to smoothly interpolate between uniform noise and hard quantization while maintaining differentiability. *Equal contribution 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
We further show that it is possible to analytically integrate out noise when calculating gradients and in some cases drastically reduce their variance (Section 4.2). Finally, we evaluate our approach empirically in Section 5 and find that a better match between training and test phases leads to improved performance, especially in models of lower complexity. 2 Related work Most prior work on end-to-end trained lossy compression optimizes a rate-distortion loss of the form $-\log_2 P(\lfloor f(x) \rceil) + \lambda\, d(x, g(\lfloor f(x) \rceil)).$ (1) Here, $f$ is an encoder, $g$ is a decoder, $P$ is a probability mass function, and they may all depend on parameters we want to optimize. The distortion $d$ measures the discrepancy between inputs and reconstructions and the parameter $\lambda > 0$ controls the trade-off between it and the number of bits. The rounding function $\lfloor \cdot \rceil$ used for quantization and the discreteness of $P$ pose challenges for optimizing the encoder. Several papers have proposed methods to deal with quantization for end-to-end trained lossy compression. Toderici et al. [32] replaced rounding with stochastic rounding to the nearest integer. Theis et al. [31] applied hard quantization during both training and inference but used straight-through gradient estimates to obtain a training signal for the encoder. Agustsson et al. [1] used a smooth approximation of vector quantization that was annealed towards hard quantization during training. Most relevant for our work is the approach taken by Ballé et al. [4], who proposed to add uniform noise during training, $-\log_2 p(f(x) + u) + \lambda\, d(x, g(f(x) + u)),$ (2) as an approximation to rounding at test time. Here, $p$ is a density and $u$ is a sample of uniform noise drawn from $\mathcal{U}([-0.5, 0.5)^D)$. If the distortion is a mean squared error, then this approach is equivalent to a variational autoencoder [25, 18] with a uniform encoder [5, 31]. Another line of research studies the simulation of noisy channels using a noiseless channel, that is, the reverse of channel coding. In particular, how can we communicate a sample $z$ from a conditional distribution (the noisy channel), $q(z \mid x)$, using as few bits as possible (the noiseless channel)? The reverse Shannon theorem of Bennett and Shor [8] shows that it is possible to communicate a sample using a number of bits not much larger than the mutual information between $X$ and $Z$, $I[X, Z]$. Existing implementations of reverse channel coding operate on the same principle. First, a large number of samples $z_n$ is generated from a fixed distribution $p$. Importantly, this distribution does not depend on $x$ and the same samples can be generated on both the sender's and the receiver's side using a shared source of randomness (for our purposes this would be a pseudorandom number generator with a fixed seed). One of these samples is then selected and its index $n$ communicated digitally. The various methods differ in how this index is selected. Cuff [11] provided a constructive achievability proof for the mutual information bound using an approach which was later dubbed the likelihood encoder [12]. In this approach the index $n$ is picked stochastically with a probability proportional to $p(x \mid z_n)$. An equivalent approach dubbed MIRACLE was later derived by Havasi et al. [15] using importance sampling. In contrast to Cuff and Song [12], Havasi et al. [15] considered communication of a single sample from $q$ instead of a sequence of samples. MIRACLE also represents the first application of quantizationless compression in the context of neural networks.
Originally designed for model compression, it was recently adapted to the task of lossy image compression [13]. An earlier but computationally more expensive method based on rejection sampling was described by Harsha et al. [14]. Li and El Gamal [21] described a simple yet efficient approach. The authors proved that it uses at most $I[X, Z] + \log_2(I[X, Z] + 1) + 4$ (3) bits on average. To our knowledge, this is the lowest known upper bound on the bits required to communicate a single sample. The overhead is still significant if we want to communicate a small amount of information but becomes negligible as the mutual information increases. Finally, we will rely heavily on results on uniform dither and universal quantization [29, 37, 35] to communicate a sample from a uniform distribution (Section 3.2). Choi et al. [9] used universal quantization as a relaxation of hard quantization. However, universal quantization was used in a manner that still produced a non-differentiable loss, which the authors dealt with by using straight-through gradient estimates [7]. In contrast, here we will use fully differentiable losses during training and use the same method of encoding at training and at test time. Roberts [27] applied universal quantization directly to grayscale pixels and found that it leads to superior picture quality compared to quantization. 3 Compression without quantization Instead of approximating quantization or relying on straight-through gradient estimates, we would like to use a differentiable channel and thus eliminate any need for approximations during training. Existing methods to simulate a noisy channel $q_{Z|x}$ require simulating a number of random variables $Z_n \sim p_Z$ which is exponential in $D_{\mathrm{KL}}[q_{Z|x} \,\|\, p_Z]$ for every $x$ we wish to communicate [e.g., 15]. Since the mutual information $I[X, Z]$ is a lower bound on the average Kullback-Leibler divergence, this creates a dilemma. On the one hand, we would like to keep the divergence small to limit the computational cost, for example by encoding blocks of coefficients (sometimes also referred to as "latents") separately [15, 13]. On the other hand, the information transmitted should be large to keep the statistical overhead small (Equation 3). One might hope that more efficient algorithms exist which can quickly identify an index $n$ without having to explicitly generate all samples. However, such an algorithm is not possible, as it would allow us to efficiently sample distributions which are known to be hard to simulate even approximately (in terms of total variation distance, $D_{\mathrm{TV}}$) [22]. More precisely, we have the following lemma. Lemma 1. Consider an algorithm which receives a description of an arbitrary probability distribution $q$ as input and is also given access to an unlimited number of i.i.d. random variables $Z_n \sim p$. It outputs $Z \sim \tilde{q}$ such that its distribution is approximately $q$ in the sense that $D_{\mathrm{TV}}[\tilde{q}, q] \le 1/12$. If RP $\ne$ NP, then there is no such algorithm whose time complexity is polynomial in $D_{\mathrm{KL}}[q \,\|\, p]$. A proof and details are provided in Appendix B. In order to design efficient algorithms for communicating samples, the lemma implies we need to make assumptions about the distributions involved. 3.1 Uniform noise channel A particularly simple channel is the additive uniform noise channel, $Z = f(x) + U, \quad U \sim \mathcal{U}([-0.5, 0.5)^D).$ (4) Replacing quantization with uniform noise during training is a popular strategy for end-to-end trained compression [e.g., 4, 5, 36].
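In code, the channel of Equation 4 and the training loss of Equation 2 amount to only a few lines. The sketch below is a minimal PyTorch rendering of ours, with made-up stand-ins for the transforms and entropy model (a real system would use learned networks and a flexible density):

```python
import torch

LN2 = 0.6931471805599453

def rate_distortion_loss(x, f, g, log2_density, lam):
    """Eq. (2): -log2 p(f(x) + u) + lam * d(x, g(f(x) + u)), u ~ U([-0.5, 0.5)^D)."""
    y = f(x)                                           # coefficients
    u = torch.empty_like(y).uniform_(-0.5, 0.5)
    z = y + u                                          # Eq. (4): uniform noise channel
    rate = -log2_density(z).flatten(1).sum(-1)         # bits per example
    distortion = ((x - g(z)) ** 2).flatten(1).mean(-1) # MSE
    return (rate + lam * distortion).mean()

# Toy stand-ins: a "transform" that scales, and a standard logistic entropy model.
f = lambda x: 4.0 * x
g = lambda z: z / 4.0
def log2_density(z):
    # log2 of a standard logistic density
    return (2.0 * torch.nn.functional.logsigmoid(z) - z) / LN2

x = torch.randn(8, 16, requires_grad=True)
loss = rate_distortion_loss(x, f, g, log2_density, lam=0.01)
loss.backward()   # fully differentiable -- no quantization in the training graph
```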
1. What is the focus and contribution of the paper regarding quantization noise? 2. What are the strengths of the proposed approach, particularly in its mathematical explanation? 3. What are the weaknesses of the paper, especially regarding its experimental scope and practical applicability? 4. How does the reviewer assess the clarity and reproducibility of the paper's content? 5. Are there any questions or concerns regarding the limitation of the proposed method, specifically its definition and training complexity?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper proposes an approach to account for quantization noise based on Ziv's universal quantization principles. Strengths The addressed problem is relevant and timely and may potentially have a broad impact. The paper is very clear and very well written. The mathematical explanation is also clear, even though it owes a lot to Ziv. Weaknesses Experiments with models less complex than scalable hyperpriors and more complex than linear would have given a better understanding of the effectiveness of the technique. For example, the authors could experiment with a simple autoencoder with a residual encoder. It is not totally clear how to practically apply the proposed scheme due to the lack of a section that explains how to apply it, which may limit the reproducibility of the results. This work is limited by definition to uniform noise. What is the extra training complexity introduced by the proposed method? Can it be quantified? The experiments are performed on one dataset only (although large), which is limiting. Is the source code provided? There are some typos, like for example in the title of sec. 5.2 "Traininig"
NIPS
Title Levenshtein Transformer Abstract Modern neural sequence generation models are built to either generate tokens step-by-step from scratch or (iteratively) modify a sequence of tokens bounded by a fixed length. In this work, we develop Levenshtein Transformer, a new partially autoregressive model devised for more flexible and amenable sequence generation. Unlike previous approaches, the basic operations of our model are insertion and deletion. Their combination facilitates not only generation but also sequence refinement allowing dynamic length changes. We also propose a set of new training techniques dedicated to them, effectively exploiting one as the other's learning signal thanks to their complementary nature. Experiments applying the proposed model achieve comparable or even better performance with much-improved efficiency on both generation (e.g. machine translation, text summarization) and refinement tasks (e.g. automatic post-editing). We further confirm the flexibility of our model by showing that a Levenshtein Transformer trained on machine translation can straightforwardly be used for automatic post-editing.¹ 1 Introduction Neural sequence generation models are widely developed and deployed in tasks such as machine translation (Bahdanau et al., 2015; Vaswani et al., 2017). As we examine the current frameworks, the most popular autoregressive models generate tokens step-by-step. Recent non-autoregressive approaches (Gu et al., 2018; Kaiser et al., 2018; Lee et al., 2018) have proved it possible to perform generation, with comparable if not better quality, within a much smaller number of decoding iterations. In this paper, we propose Levenshtein Transformer (LevT), aiming to address the lack of flexibility of current decoding models. Notably, in existing frameworks, the length of a generated sequence is either fixed or monotonically increased as the decoding proceeds. This remains incompatible with human-level intelligence where humans can revise, replace, revoke or delete any part of their generated text. Hence, LevT is proposed to bridge this gap by breaking the in-so-far standardized decoding mechanism and replacing it with two basic operations — insertion and deletion. We train LevT using imitation learning. The resulting model contains two policies and they are executed in an alternate manner. Empirically, we show that LevT achieves comparable or better results than a standard Transformer model on machine translation and summarization, while maintaining the efficiency advantages of parallel decoding similarly to (Lee et al., 2018). With this model, we argue that the decoding becomes more flexible. For example, when the decoder is given an empty token, it falls back to a normal sequence generation model. On the other hand, the decoder acts as a refinement model when the initial state is a low-quality generated sequence. Indeed, we show that a LevT trained for machine translation is directly applicable to translation post-editing without any change. This would not be possible with any framework in the literature because generation and refinement are treated as two different tasks due to the model's inductive bias. One crucial component in the LevT framework is the learning algorithm. ¹Codes for reproducing this paper are released in https://github.com/pytorch/fairseq/tree/master/examples/nonautoregressive_translation 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.
We leverage the characteristics of insertion and deletion — they are complementary but also adversarial. The algorithm we propose is called "dual policy learning". The idea is that when training one policy (insertion or deletion), we use the output from its adversary at the previous iteration as input. An expert policy, on the other hand, is drawn upon to provide a correction signal. Although, in theory, this learning algorithm is applicable to other imitation learning scenarios where a dual adversarial policy exists, in this work we primarily focus on a proof of concept of this algorithm, applying it to train the proposed LevT model. To this end, we summarize the contributions as follows: • We propose Levenshtein Transformer (LevT), a new sequence generation model composed of the insertion and deletion operations. This model achieves comparable or even better results than a strong Transformer baseline in both machine translation and text summarization, but with much better efficiency (up to ×5 speed-up in terms of actual machine execution time); • We propose a corresponding learning algorithm under the theoretical framework of imitation learning, tackling the complementary and adversarial nature of the dual policies; • We recognize our model as a pioneering attempt to unify sequence generation and refinement, thanks to its built-in flexibility. With this unification, we empirically validate the feasibility of applying a LevT model trained on machine translation directly to translation post-editing, without any change. 2 Problem Formulation 2.1 Sequence Generation and Refinement We unify the general problems of sequence generation and refinement by casting them to a Markov Decision Process (MDP) defined by a tuple $(\mathcal{Y}, \mathcal{A}, \mathcal{E}, \mathcal{R}, y^0)$. We consider a setup consisting of an agent interacting with an environment $\mathcal{E}$ which receives the agent's editing actions and returns the modified sequence. We define $\mathcal{Y} = \mathcal{V}^{N_{\max}}$ as a set of discrete sequences up to length $N_{\max}$, where $\mathcal{V}$ is a vocabulary of symbols. At every decoding iteration, the agent receives an input $y$ drawn from scratch or from an uncompleted generation, chooses an action $a$ and gets a reward $r$. We use $\mathcal{A}$ to denote the set of actions and $\mathcal{R}$ for the reward function. Generally the reward function $\mathcal{R}$ measures the distance between the generation and the ground-truth sequence, $\mathcal{R}(y) = -\mathcal{D}(y, y^*)$, where $\mathcal{D}$ can be any distance measurement such as the Levenshtein distance (Levenshtein, 1965). It is crucial to incorporate $y^0 \in \mathcal{Y}$, the initial sequence the agent receives, into our formulation: when $y^0$ is an already generated sequence from another system, the agent essentially learns to do refinement, while it falls back to generation if $y^0$ is an empty sequence. The agent is modeled by a policy, $\pi$, that maps the current generation to a probability distribution over $\mathcal{A}$. That is, $\pi : \mathcal{Y} \to P(\mathcal{A})$. 2.2 Actions: Deletion & Insertion Following the above MDP formulation, with a subsequence $y^k = (y_1, y_2, \dots, y_n)$, the two basic actions — deletion and insertion — are called to generate $y^{k+1} = \mathcal{E}(y^k, a^{k+1})$. Here we let $y_1$ and $y_n$ be the special symbols <s> and </s>, respectively. Since we mainly focus on the policy of a single round of generation, the superscripts are omitted in this section for simplicity. For conditional generation like MT, our policy also includes an input of source information $x$, which is also omitted here.
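To make the action space concrete before the policies are formalized below, the following toy sketch shows how the environment E applies one round of edits (pure Python; tokens and masks are made up for illustration, and 1 marks a token to delete, matching the convention above):

```python
def apply_deletion(tokens, delete):
    """E(y, d): drop every token with d_i = 1; boundaries must be kept."""
    return [t for t, d in zip(tokens, delete) if d == 0]

def apply_placeholders(tokens, counts):
    """E(y', p): insert counts[i] <PLH> symbols into the slot (y_i, y_{i+1})."""
    out = []
    for i, t in enumerate(tokens):
        out.append(t)
        if i < len(tokens) - 1:
            out += ["<PLH>"] * counts[i]
    return out

def fill_placeholders(tokens, words):
    """E(y'', t): replace each <PLH> with the next predicted token."""
    it = iter(words)
    return [next(it) if t == "<PLH>" else t for t in tokens]

y = ["<s>", "a", "cat", "</s>"]
y = apply_deletion(y, [0, 1, 0, 0])       # -> ['<s>', 'cat', '</s>']
y = apply_placeholders(y, [1, 0])         # -> ['<s>', '<PLH>', 'cat', '</s>']
y = fill_placeholders(y, ["the"])         # -> ['<s>', 'the', 'cat', '</s>']
print(y)
```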
Deletion The deletion policy reads the input sequence $y$ and, for every token $y_i \in y$, the deletion policy $\pi^{\text{del}}(d \mid i, y)$ makes a binary decision which is 1 (delete this token) or 0 (keep it). We additionally constrain $\pi^{\text{del}}(0 \mid 1, y) = \pi^{\text{del}}(0 \mid n, y) = 1$ to avoid the sequence boundary being broken. The deletion classifier can also be seen as a fine-grained version of the discriminator used in GANs (Goodfellow et al., 2014), where we predict "fake" or "real" labels for every predicted token. Insertion In this work, it is slightly more complex to build the insertion operation because it involves two phases — placeholder prediction and token prediction — so that it is able to insert multiple tokens at the same slot. First, among all the possible insertion slots $(y_i, y_{i+1})$ in $y$, $\pi^{\text{plh}}(p \mid i, y)$ predicts the possibility of adding one or several placeholders. In what follows, for every placeholder predicted as above, a token prediction policy $\pi^{\text{tok}}(t \mid i, y)$ replaces the placeholders with actual tokens in the vocabulary. The two-stage insertion process can also be viewed as a hybrid of the Insertion Transformer (Stern et al., 2019) and the masked language model (MLM, Devlin et al., 2018; Ghazvininejad et al., 2019). Policy combination Recall that our two operations are complementary. Hence we combine them in an alternate fashion. For example, in sequence generation from the empty sequence, the insertion policy is first called, it is followed by deletion, and this repeats until a certain stopping condition is fulfilled. Indeed, it is possible to leverage the parallelism in this combination. We essentially decompose one iteration of our sequence generator into three phases: "delete tokens — insert placeholders — replace placeholders with new tokens". Within each stage, all operations are performed in parallel. More precisely, given the current sequence $y = (y_0, \dots, y_n)$, and supposing the action to predict is $a = \{\underbrace{d_0, \dots, d_n}_{d};\ \underbrace{p_0, \dots, p_{n-1}}_{p};\ \underbrace{t_0^1, \dots, t_0^{p_0}, \dots, t_{n-1}^{p_{n-1}}}_{t}\}$, the policy for one iteration is: $\pi(a \mid y) = \prod_{d_i \in d} \pi^{\text{del}}(d_i \mid i, y) \cdot \prod_{p_i \in p} \pi^{\text{plh}}(p_i \mid i, y') \cdot \prod_{t_i \in t} \pi^{\text{tok}}(t_i \mid i, y''),$ (1) where $y' = \mathcal{E}(y, d)$ and $y'' = \mathcal{E}(y', p)$. We parallelize the computation within each of these sub-tasks. 3 Levenshtein Transformer In this section, we cover the specifics of Levenshtein Transformer and the dual-policy learning algorithm. Overall, our model takes a sequence of tokens (or none) as input and then iteratively modifies it by alternating between insertion and deletion, until the two policies combined converge. We describe the detailed learning and inference algorithms in the Appendix. 3.1 Model We use the Transformer (Vaswani et al., 2017) as the basic building block. For conditional generation, the source $x$ is included in each TransformerBlock. The states from the $l$-th block are: $h_0^{(l+1)}, h_1^{(l+1)}, \dots, h_n^{(l+1)} = \begin{cases} E_{y_0} + P_0,\ E_{y_1} + P_1,\ \dots,\ E_{y_n} + P_n, & l = 0 \\ \text{TransformerBlock}_l\big(h_0^{(l)}, h_1^{(l)}, \dots, h_n^{(l)}\big), & l > 0 \end{cases}$ (2) where $E \in \mathbb{R}^{|\mathcal{V}| \times d_{\text{model}}}$ and $P \in \mathbb{R}^{N_{\max} \times d_{\text{model}}}$ are the token and position embeddings, respectively. We show an illustration of the proposed LevT model for one refinement step (delete, insert) as Figure 1. Policy Classifiers The decoder outputs $(h_0, h_1, \dots, h_n)$ are passed to three policy classifiers: 1. Deletion Classifier: LevT scans over the input tokens (except for the boundaries) and predicts "deleted" (0) or "kept" (1) for each token position, $\pi_\theta^{\text{del}}(d \mid i, y) = \text{softmax}(h_i \cdot A^\top), \quad i = 1, \dots, n-1,$ (3) where $A \in \mathbb{R}^{2 \times d_{\text{model}}}$, and we always keep the boundary tokens.
2. Placeholder Classifier: LevT predicts the number of tokens to be inserted at every pair of consecutive positions, by casting the representation to a categorical distribution: $\pi_\theta^{\text{plh}}(p \mid i, y) = \text{softmax}(\text{concat}(h_i, h_{i+1}) \cdot B^\top), \quad i = 0, \dots, n-1,$ (4) where $B \in \mathbb{R}^{(K_{\max}+1) \times 2 d_{\text{model}}}$. Based on the number ($0 \sim K_{\max}$) of tokens it predicts, we insert that number of placeholders at the current position. In our implementation, a placeholder is represented by a special token <PLH> reserved in the vocabulary. 3. Token Classifier: following the placeholder prediction, LevT needs to fill in tokens replacing all the placeholders. This is achieved by training a token predictor as follows: $\pi_\theta^{\text{tok}}(t \mid i, y) = \text{softmax}(h_i \cdot C^\top), \quad \forall y_i = \text{<PLH>},$ (5) where $C \in \mathbb{R}^{|\mathcal{V}| \times d_{\text{model}}}$, with parameters shared with the embedding matrix. Weight Sharing Our default implementation always assumes the three operations share the same Transformer backbone, so that each benefits from features learned for the other operations. However, it is also possible to disable weight sharing and train separate decoders for each operation, which increases the capacity of the model while not affecting the overall inference time. Early Exit Although it is parameter-efficient to share the same Transformer architecture across the above three heads, there is room for improvement, as one decoding iteration requires three full passes of the network. To trade off performance against computational cost, we propose to perform early exit (attaching the classifier to an intermediate block instead of the last one) for $\pi^{\text{del}}$ and $\pi^{\text{plh}}$ to reduce computation, while keeping $\pi^{\text{tok}}$ always based on the last block, considering that token prediction is usually more challenging than the other two tasks. 3.2 Dual-policy Learning Imitation Learning We use imitation learning to train the Levenshtein Transformer. Essentially, we let the agent imitate the behaviors that we draw from some expert policy $\pi^*$. The expert policy is derived from direct usage of the ground-truth targets or from a less noisy version filtered by sequence distillation (Kim and Rush, 2016). The objective is to maximize the following expectation: $\underbrace{\mathbb{E}_{y_{\text{del}} \sim d_{\tilde{\pi}_{\text{del}}},\, d^* \sim \pi^*} \sum_{d_i^* \in d^*} \log \pi_\theta^{\text{del}}(d_i^* \mid i, y_{\text{del}})}_{\text{Deletion Objective}} + \underbrace{\mathbb{E}_{y_{\text{ins}} \sim d_{\tilde{\pi}_{\text{ins}}},\, p^*, t^* \sim \pi^*} \left[\sum_{p_i^* \in p^*} \log \pi_\theta^{\text{plh}}(p_i^* \mid i, y_{\text{ins}}) + \sum_{t_i^* \in t^*} \log \pi_\theta^{\text{tok}}(t_i^* \mid i, y'_{\text{ins}})\right]}_{\text{Insertion Objective}},$ where $y'_{\text{ins}}$ is the output after inserting the placeholders $p^*$ into $y_{\text{ins}}$. $\tilde{\pi}_{\text{del}}$ and $\tilde{\pi}_{\text{ins}}$ are the roll-in policies and we repeatedly draw states (sequences) from their induced state distributions $d_{\tilde{\pi}_{\text{del}}}$, $d_{\tilde{\pi}_{\text{ins}}}$. These states are first executed by the expert policy, which returns the actions suggested by the expert, and then we maximize the conditional log-likelihood over them. By definition, the roll-in policy determines the state distribution fed to $\pi_\theta$ during training. In this work, we have two strategies to construct the roll-in policy — adding noise to the ground-truth or using the output from the adversary policy. Figure 2 shows a diagram of this learning paradigm. We formally write down the roll-in policies as follows. 1. Learning to Delete: we design $\tilde{\pi}_{\text{del}}$ as a stochastic mixture between the initial input $y^0$ and the output of applying insertion with the model, with some mixture factor $\alpha \in [0, 1]$: $d_{\tilde{\pi}_{\text{del}}} = \{y^0 \text{ if } u < \alpha \text{ else } \mathcal{E}(\mathcal{E}(y', p^*), \tilde{t}),\ p^* \sim \pi^*,\ \tilde{t} \sim \pi_\theta\}$ (6) where $u \sim \text{Uniform}[0, 1]$ and $y'$ is any sequence ready for token insertion. $\tilde{t}$ is obtained by sampling instead of taking the argmax in Eq. (5).
3.2 Dual-policy Learning

Imitation Learning We use imitation learning to train the Levenshtein Transformer. Essentially, we let the agent imitate behaviors drawn from some expert policy π*. The expert policy is derived either from direct usage of the ground-truth targets or from a less noisy version filtered by sequence distillation (Kim and Rush, 2016). The objective is to maximize the following expectation:

$$\underbrace{\mathbb{E}_{y_{\mathrm{del}} \sim d_{\tilde{\pi}_{\mathrm{del}}},\ d^* \sim \pi^*} \sum_{d_i^* \in d^*} \log \pi_\theta^{\mathrm{del}}(d_i^* \mid i, y_{\mathrm{del}})}_{\text{Deletion Objective}} \ +\ \underbrace{\mathbb{E}_{y_{\mathrm{ins}} \sim d_{\tilde{\pi}_{\mathrm{ins}}},\ p^*, t^* \sim \pi^*} \bigg[ \sum_{p_i^* \in p^*} \log \pi_\theta^{\mathrm{plh}}(p_i^* \mid i, y_{\mathrm{ins}}) + \sum_{t_i^* \in t^*} \log \pi_\theta^{\mathrm{tok}}(t_i^* \mid i, y'_{\mathrm{ins}}) \bigg]}_{\text{Insertion Objective}},$$

where $y'_{\mathrm{ins}}$ is the output after inserting the placeholders $p^*$ into $y_{\mathrm{ins}}$. Here $\tilde{\pi}_{\mathrm{del}}$ and $\tilde{\pi}_{\mathrm{ins}}$ are the roll-in policies, and we repeatedly draw states (sequences) from their induced state distributions $d_{\tilde{\pi}_{\mathrm{del}}}$ and $d_{\tilde{\pi}_{\mathrm{ins}}}$. The expert policy is executed on these states to return suggested actions, over which we then maximize the conditional log-likelihood. By definition, the roll-in policy determines the state distribution fed to πθ during training. In this work, we have two strategies to construct the roll-in policy: adding noise to the ground-truth, or using the output from the adversary policy. Figure 2 shows a diagram of this learning paradigm. We formally write down the roll-in policies as follows.

1. Learning to Delete: we design $\tilde{\pi}_{\mathrm{del}}$ as a stochastic mixture between the initial input $y^0$ and the output of applying the model's insertion policy, with some mixture factor $\alpha \in [0, 1]$:

$$d_{\tilde{\pi}_{\mathrm{del}}} = \big\{\, y^0 \ \text{if}\ u < \alpha \ \text{else}\ \mathcal{E}\big(\mathcal{E}(y', p^*),\ \tilde{t}\big),\quad p^* \sim \pi^*,\ \tilde{t} \sim \pi_\theta \,\big\}, \qquad (6)$$

where $u \sim \mathrm{Uniform}[0, 1]$ and $y'$ is any sequence ready for token insertion; $\tilde{t}$ is obtained by sampling, instead of taking the argmax, from Eq. (5).

2. Learning to Insert: similar to the deletion step, we apply a mixture of the deletion output and a randomly word-dropped version of the ground-truth, inspired by recent advances in training masked language models (Devlin et al., 2018). We use random dropping as a form of noise injection to encourage more exploration. Let $\beta \in [0, 1]$ and $u \sim \mathrm{Uniform}[0, 1]$; then

$$d_{\tilde{\pi}_{\mathrm{ins}}} = \big\{\, \mathcal{E}(y^0, d^*),\ d^* \sim \pi^* \ \text{if}\ u < \beta \ \text{else}\ \mathcal{E}(y^*, \tilde{d}),\ \tilde{d} \sim \pi^{\mathrm{RND}} \,\big\}. \qquad (7)$$

Expert Policy It is crucial to construct an expert policy in imitation learning that is neither too hard nor too weak to learn from. Specifically, we consider two types of experts:

1. Oracle: One way is to build an oracle which has access to the ground-truth sequence. It returns the optimal actions a* (either oracle insertion p*, t* or oracle deletion d*) by

$$a^* = \operatorname*{argmin}_{a}\ \mathcal{D}\big(y^*, \mathcal{E}(y, a)\big). \qquad (8)$$

Here we use the Levenshtein distance (Levenshtein, 1965)² as $\mathcal{D}$, considering that the action suggestions can be obtained efficiently by dynamic programming.

2. Distillation: We also explore using another teacher model to provide the expert policy, which is known as sequence-level knowledge distillation (Kim and Rush, 2016). This technique has been widely used in previous approaches to non-autoregressive generation (Gu et al., 2018). More precisely, we first train an autoregressive teacher model using the same datasets, and then replace the ground-truth sequence y* by the beam-search result of this teacher model, yAR. We use the same mechanism as with the ground-truth oracle to find the suggested actions.

3.3 Inference

Greedy Decoding At inference time, we apply the trained model to the initial sequence y0 for several iterations, greedily picking the actions with the highest probabilities in Eqs. (3), (4) and (5). Moreover, we find that using search (instead of greedy decoding) or noisy parallel decoding (Cho, 2016) does not yield much gain for LevT. This observation is quite the opposite of what has been widely found in autoregressive decoding. We hypothesize two possible reasons: (i) the local optimum reached by greedy decoding in autoregressive models is often far from the global optimum, and search techniques resolve this issue with tabularization; in our case, however, because LevT inserts and deletes tokens dynamically, it can easily revoke tokens that are found to be sub-optimal and re-insert better ones; (ii) the log-probability of LevT is not a good metric for selecting the best output. Still, we believe further improvements are possible with an external re-ranker, e.g., an autoregressive teacher model. We leave this to future work.

Termination Condition Decoding stops when one of the following two conditions is fulfilled:

1. Looping: Generation is terminated if two consecutive refinement iterations return the same output, which happens when (i) there are no words to delete or insert, or (ii) the agent gets stuck in an infinite loop, i.e., the insertion and deletion counter each other and keep looping.

2. Timeout: We further set a maximum number of iterations (timeout) to guarantee constant-time complexity in the worst case (Lee et al., 2018; Ghazvininejad et al., 2019).

Penalty for Empty Placeholders Similar to Stern et al. (2019), we add a penalty for predicting "empty" (zero) placeholders during decoding, since over-predicting empty slots may result in overly short outputs. A penalty term γ ∈ [0, 3] is subtracted from the logits of 0 in Eq. (4).

²We only consider the variant which computes insertions and deletions; no substitution is considered.
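For reference, here is one possible dynamic-programming oracle for Eq. (8) under the insertion/deletion-only Levenshtein variant from the footnote above. It is an illustrative reconstruction, not the released implementation, and the returned action format is an assumption of this sketch.

```python
def levenshtein_actions(y, y_star):
    """Sketch of an insertion/deletion-only edit-distance oracle (Eq. (8)).

    Returns per-position delete decisions for y and per-slot lists of tokens
    to insert, recovered by backtracking through the standard DP table.
    """
    n, m = len(y), len(y_star)
    # dp[i][j]: min insertions+deletions to turn y[:i] into y_star[:j].
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if y[i - 1] == y_star[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
            else:  # no substitution: delete y[i-1] or insert y_star[j-1]
                dp[i][j] = 1 + min(dp[i - 1][j], dp[i][j - 1])

    delete = [0] * n                      # 1 = delete y[i]
    insert = [[] for _ in range(n + 1)]   # tokens inserted before position i
    i, j = n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and y[i - 1] == y_star[j - 1]
                and dp[i][j] == dp[i - 1][j - 1]):
            i, j = i - 1, j - 1           # match: keep the token
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            delete[i - 1] = 1             # oracle deletion d*
            i -= 1
        else:
            insert[i].insert(0, y_star[j - 1])  # oracle insertion p*, t*
            j -= 1
    return delete, insert
```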
(Figure 3: a translation example, with the inserted tokens shown in purple and the deleted tokens shown with red strikethrough.)

4 Experiments

We validate the efficiency, effectiveness, and flexibility of the Levenshtein Transformer extensively across three different tasks: machine translation (MT), text summarization (TS), and automatic post-editing (APE) for machine translation, from both the generation (§4.1) and refinement (§4.2) perspectives.

4.1 Sequence Generation

From the sequence generation perspective, we evaluate the LevT model on MT and TS. As a special case, sequence generation assumes the empty input y0 = <S></S>, and no initial deletion is applied.

Data & Evaluation We use three diverse language pairs for the MT experiments: WMT'16 Romanian-English (Ro-En)³, WMT'14 English-German (En-De)⁴ and WAT2017 Small-NMT English-Japanese (En-Ja, Nakazawa et al., 2017)⁵. The TS experiments use preprocessed data from the Annotated English Gigaword (Gigaword, Rush et al., 2015)⁶. We learn a byte-pair encoding (BPE, Sennrich et al., 2016) vocabulary on the tokenized data. Detailed dataset statistics can be found in the Appendix. For evaluation metrics, we use BLEU (Papineni et al., 2002) for MT and ROUGE-1,2,L (Lin, 2004) for TS. Before computing BLEU scores for Japanese output, we always segment Japanese words using KyTea⁷.

Models & Training We adopt the model architecture of Transformer base (Vaswani et al., 2017) for the proposed LevT model and the autoregressive baseline. All Transformer-based models are trained on 8 Nvidia Volta GPUs for a maximum of 300K steps with a total batch size of around 65,536 tokens per step (we leave more details to the Appendix).

³http://www.statmt.org/wmt16/translation-task.html
⁴http://www.statmt.org/wmt14/translation-task.html
⁵http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2017/snmt/index.html
⁶https://github.com/harvardnlp/sent-summary
⁷http://www.phontron.com/kytea/

Overall results We present our main results on generation quality and decoding speed in Table 1. We measure speed by the average generation latency when generating one sequence at a time on a single Nvidia V100 GPU; to remove implementation bias, we also present the number of decoder iterations as a reference. For both the MT and summarization tasks, the proposed LevT achieves comparable and sometimes better generation quality compared to the strong autoregressive baseline, while being much more efficient at decoding. A translation example is shown in Figure 3, and we leave more to the Appendix. We conjecture that the benefit of the distillation expert comes from the teacher model's output having fewer modes and being much less noisy than the real data; consequently, LevT needs fewer iterations to converge to this expert policy.

Ablation on Efficiency In Figure 4a, we plot the average number of iterations against the input length on a monolingual corpus; LevT learns to adjust its decoding time accordingly. We also explore the "early exit" variants, where LevT(m-n) denotes a model with m and n blocks for deletion (Eq. (3)) and placeholder prediction (Eq. (4)), respectively. Figure 4b shows that, although it compromises quality a bit, our model with early exit achieves up to a ×5 speed-up (execution time) compared against a strong autoregressive Transformer using beam search.

Ablation on Weight Sharing We also evaluate LevT with the different weight-sharing settings noted in §3.1. The results of models trained with the oracle or with distillation are listed in Table 2a.
We observe that weight sharing is beneficial, especially between the two insertion operations (the placeholder and token classifiers). Moreover, not sharing the deletion operation with insertion yields another +0.5 BLEU improvement over the default setting, which may indicate that insertion and deletion capture complementary information and benefit from the larger capacity of being learned separately.

Importance of the mixture roll-in policy We perform an ablation study on the learning algorithm. Specifically, we train a model with no mixing of πθ in Eq. (6); we name this experiment DAE due to its resemblance to a denoising autoencoder, and closely follow the standard pipeline established by Lee et al. (2018). Table 2b shows this comparison: the deletion loss of DAE is much smaller, while its generation BLEU score is inferior. We conjecture that this is caused by the mismatch between the states produced by the model and the states induced by the roll-in policy used to train the DAE.

vs. Existing Refinement-based Models Table 2a also includes results from two relevant recent works that likewise incorporate iterative refinement in non-autoregressive sequence generation. For a fair comparison, we use the result with length beam 1 from Ghazvininejad et al. (2019). Although both approaches use similar "denoising" objectives to train the refinement process, our model explicitly learns "insertion" and "deletion" in a dual-policy learning fashion, and outperforms both models.

4.2 Sequence Refinement

We evaluate LevT's capability of refining sequence outputs on the APE task. In this setting, the inputs are pairs of a source sequence and a black-box MT system's output, and the ground-truth outputs come from real human edits, expanded with synthetic data.

Dataset We follow the usual protocol for synthetic APE experiments (Grangier and Auli, 2017): we first train the input MT system on half of the dataset, and then train a refinement model on the other half, based on the outputs produced by the MT system trained in the first phase. For the real APE task, we use the data from the WMT17 Automatic Post-Editing Shared Task⁸ on En-De, which contains both real PE triples and a large-scale synthetic corpus.

Models & Evaluation The baseline model is a standard Transformer encoding the concatenation of the source and the MT system's output. For the MT system, we want imperfect systems whose outputs need refinement; we consider a statistical phrase-based MT system (PBMT, Koehn et al., 2003) and an RNN-based NMT system (Bahdanau et al., 2015). Apart from BLEU scores, we additionally report the translation error rate (TER, Snover et al., 2006), as it is widely used in the APE literature.

⁸http://www.statmt.org/wmt17/ape-task.html

Overall results We show the main comparison in Table 3. When trained from scratch, LevT consistently improves the performance of the input MT system (either PBMT or NMT), and it also achieves better performance than the autoregressive Transformer in most cases.

Pre-training on MT Thanks to the generality of the LevT model, we show it is feasible to directly apply a LevT model trained for generation to refinement tasks, in this case MT and APE. We call this the "zero-shot post-editing" setting. According to Table 3, the pre-trained MT models are always capable of improving the initial MT input in the synthetic tasks. The real APE task, however, differs quite a bit from the synthetic tasks, because human translators normally only fix a few spotted errors.
This results in very high BLEU scores even for the "Do-nothing" column. Nevertheless, the pre-trained MT model achieves the best results after fine-tuning on the PE data, indicating that LevT is able to transfer the knowledge learned for generation to refinement.

Collaboration with an Oracle Thanks to the separation of the insertion and deletion operations, LevT has better interpretability and controllability. For example, we test the ability of LevT to adapt to oracle (e.g., human translator) instructions. As shown in Figure 5, both the MT and PE tasks show huge improvements if the oracle deletion is given at every step, and improve even further if the oracle provides both the correct deletions and the number of placeholders to insert. This also sheds some light on computer-assisted text editing for human translators.

5 Related Work

Non-Autoregressive and Non-Monotonic Decoding Breaking the autoregressive constraint and the monotonic (left-to-right) decoding order of classic neural sequence generation systems has recently attracted much interest. Stern et al. (2018) and Wang et al. (2018) designed partially parallel decoding schemes that output multiple tokens at each step. Gu et al. (2018) proposed a non-autoregressive framework using discrete latent variables, which was later adopted in Lee et al. (2018) as an iterative refinement process. Ghazvininejad et al. (2019) introduced the masked language modeling objective from BERT (Devlin et al., 2018) to non-autoregressively predict and refine translations. Welleck et al. (2019), Stern et al. (2019), and Gu et al. (2019) generate translations non-monotonically, by adding words to the left or right of previous ones or by inserting words in arbitrary order to form a sequence.

Editing-Based Models Several prior works have explored incorporating "editing" operations into sequence generation tasks. For instance, Novak et al. (2016) predict and apply token substitutions iteratively on phrase-based MT system outputs using convolutional neural networks. QuickEdit (Grangier and Auli, 2017) and the deliberation network (Xia et al., 2017) both consist of two autoregressive decoders, where the second decoder refines the translation generated by the first. Guu et al. (2018) propose a neural editor that learns language modeling by first retrieving a prototype and then editing it. Freitag et al. (2019) correct patterned errors in MT system outputs using Transformer models trained on monolingual data. Additionally, the use of Levenshtein distance with dynamic programming as the oracle policy was also proposed in Sabour et al. (2018) and Dong et al. (2019). Different from these works, our proposed model is non-autoregressive and simultaneously inserts and deletes multiple tokens in an iterative fashion.

6 Conclusion

We propose the Levenshtein Transformer, a neural sequence generation model based on insertion and deletion. The resulting model achieves comparable or better performance with much-improved decoding efficiency, and unifies sequence generation and refinement in one model. The insertion and deletion operations are arguably more similar to how humans write or edit text. For future work, it is promising to extend this model to human-in-the-loop generation.

Acknowledgements We would like to thank Kyunghyun Cho, Marc'Aurelio Ranzato, Douwe Kiela, Qi Liu and our colleagues at Facebook AI Research for valuable feedback, discussions and technical assistance.
1. Can you provide more details about the "delete-and-insert" procedure in the Levenshtein Transformer? How does it differ from traditional token-based generation methods? 2. Could you elaborate on the training process using imitation learning? How is the expert policy derived from gold data or from a pre-trained auto-regressive teacher model? 3. In what ways does the Levenshtein Transformer improve upon transformer baselines in terms of accuracy and efficiency? Are there any trade-offs between these two aspects? 4. What are some potential limitations or challenges of applying the Levenshtein Transformer to real-world applications, such as language translation or text summarization? 5. Are there any plans for future research directions or improvements to the current model?
Review
Review [update] Thanks for the revision and clarification! I revised my review accordingly. ========================= This submission introduces the Levenshtein Transformer, a non-autoregressive model for text generation and post-editing. Instead of generating tokens left-to-right, it repeats a `delete-and-insert` procedure: starting from an initial string, it keeps deleting tokens from, or inserting new tokens into, the output until convergence. The model is trained with imitation learning, where expert policies derived from gold data or from a pretrained auto-regressive teacher model are explored. Experiments on text summarization, machine translation, and post-editing show that the proposed model outperforms the Transformer baselines in both accuracy and efficiency. Overall I think this is an interesting work. Yet I do have some confusion about both the technical part and the experimental part.
NIPS
Title Levenshtein Transformer

Abstract Modern neural sequence generation models are built to either generate tokens step-by-step from scratch or (iteratively) modify a sequence of tokens bounded by a fixed length. In this work, we develop the Levenshtein Transformer, a new partially autoregressive model devised for more flexible and amenable sequence generation. Unlike previous approaches, the basic operations of our model are insertion and deletion, whose combination facilitates not only generation but also sequence refinement, allowing dynamic length changes. We also propose a set of new training techniques dedicated to these operations, effectively exploiting one as the other's learning signal thanks to their complementary nature. Experiments show that the proposed model achieves comparable or even better performance with much-improved efficiency on both generation (e.g., machine translation, text summarization) and refinement tasks (e.g., automatic post-editing). We further confirm the flexibility of our model by showing that a Levenshtein Transformer trained for machine translation can straightforwardly be used for automatic post-editing.¹

1 Introduction

Neural sequence generation models are widely developed and deployed in tasks such as machine translation (Bahdanau et al., 2015; Vaswani et al., 2017). In the current frameworks, the most popular autoregressive models generate tokens step-by-step. Recent non-autoregressive approaches (Gu et al., 2018; Kaiser et al., 2018; Lee et al., 2018) have proved it possible to perform generation, with comparable if not better quality, within a much smaller number of decoding iterations.

In this paper, we propose the Levenshtein Transformer (LevT), aiming to address the lack of flexibility of current decoding models. Notably, in existing frameworks, the length of a generated sequence is either fixed or monotonically increasing as decoding proceeds. This remains incompatible with human-level intelligence, where humans can revise, replace, revoke or delete any part of their generated text. Hence, LevT is proposed to bridge this gap by breaking the so-far standardized decoding mechanism and replacing it with two basic operations: insertion and deletion.

We train LevT using imitation learning. The resulting model contains two policies, which are executed in an alternating manner. Empirically, we show that LevT achieves comparable or better results than a standard Transformer model on machine translation and summarization, while maintaining the efficiency advantages of parallel decoding, similar to Lee et al. (2018). With this model, we argue that decoding becomes more flexible. For example, when the decoder is given an empty token, it falls back to a normal sequence generation model; on the other hand, it acts as a refinement model when the initial state is a low-quality generated sequence. Indeed, we show that a LevT trained on machine translation is directly applicable to translation post-editing without any change. This would not be possible with any framework in the literature, because generation and refinement are treated as two different tasks due to the models' inductive biases. One crucial component of the LevT framework is the learning algorithm.

¹Code for reproducing this paper is released at https://github.com/pytorch/fairseq/tree/master/examples/nonautoregressive_translation
We leverage the characteristics of insertion and deletion: they are complementary, but also adversarial. The algorithm we propose is called "dual policy learning". The idea is that, when training one policy (insertion or deletion), we use the output of its adversary from the previous iteration as input, while an expert policy is drawn upon to provide a correction signal. Although, in theory, this learning algorithm is applicable to other imitation learning scenarios where a dual adversarial policy exists, in this work we primarily focus on a proof of concept of this algorithm, applying it to train the proposed LevT model.

To this end, we summarize our contributions as follows:

• We propose the Levenshtein Transformer (LevT), a new sequence generation model composed of insertion and deletion operations. This model achieves comparable or even better results than a strong Transformer baseline in both machine translation and text summarization, but with much better efficiency (up to a ×5 speed-up in terms of actual machine execution time);

• We propose a corresponding learning algorithm under the theoretical framework of imitation learning, tackling the complementary and adversarial nature of the dual policies;

• We recognize our model as a pioneering attempt to unify sequence generation and refinement, thanks to its built-in flexibility. With this unification, we empirically validate the feasibility of applying a LevT model trained on machine translation directly to translation post-editing, without any change.

2 Problem Formulation

2.1 Sequence Generation and Refinement

We unify the general problems of sequence generation and refinement by casting them as a Markov Decision Process (MDP) defined by a tuple $(\mathcal{Y}, \mathcal{A}, \mathcal{E}, \mathcal{R}, y^0)$. We consider a setup consisting of an agent interacting with an environment $\mathcal{E}$, which receives the agent's editing actions and returns the modified sequence. We define $\mathcal{Y} = \mathcal{V}^{N_{\mathrm{max}}}$ as the set of discrete sequences up to length $N_{\mathrm{max}}$, where $\mathcal{V}$ is a vocabulary of symbols. At every decoding iteration, the agent receives an input $y$, drawn either from scratch or from an incomplete generation, chooses an action $a$, and gets a reward $r$. We use $\mathcal{A}$ to denote the set of actions and $\mathcal{R}$ the reward function. Generally, the reward function measures the distance between the generation and the ground-truth sequence, $\mathcal{R}(y) = -\mathcal{D}(y, y^*)$, where $\mathcal{D}$ can be any distance measure such as the Levenshtein distance (Levenshtein, 1965). It is crucial to incorporate the initial sequence $y^0 \in \mathcal{Y}$ into our formulation: when $y^0$ is an already generated sequence from another system, the agent essentially learns to do refinement, while it falls back to generation if $y^0$ is an empty sequence. The agent is modeled by a policy $\pi$ that maps the current generation to a probability distribution over $\mathcal{A}$; that is, $\pi : \mathcal{Y} \rightarrow P(\mathcal{A})$.

2.2 Actions: Deletion & Insertion

Following the above MDP formulation, given a sequence $y^k = (y_1, y_2, \ldots, y_n)$, the two basic actions, deletion and insertion, are called to generate $y^{k+1} = \mathcal{E}(y^k, a^{k+1})$. Here we let $y_1$ and $y_n$ be the special symbols <s> and </s>, respectively. Since we mainly focus on the policy of a single round of generation, the superscripts are omitted in this section for simplicity. For conditional generation such as MT, our policy also takes an input of source information $x$, which is likewise omitted here.
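Before detailing the two actions, here is a toy sketch of the environment E: it simply applies an edit action to the current token sequence and returns the result. The function names and the list-of-strings representation are assumptions made for illustration.

```python
def apply_deletion(y, d):
    """d[i] = 1 deletes y[i]; boundary tokens <s> and </s> are always kept."""
    assert y[0] == "<s>" and y[-1] == "</s>"
    return [tok for i, tok in enumerate(y)
            if i in (0, len(y) - 1) or d[i] == 0]

def apply_insertion(y, slot_tokens):
    """slot_tokens[i] holds the tokens to insert in slot (y[i], y[i+1])."""
    out = []
    for i, tok in enumerate(y):
        out.append(tok)
        out.extend(slot_tokens.get(i, []))
    return out

y = ["<s>", "a", "cat", "</s>"]
y = apply_deletion(y, [0, 0, 1, 0])             # drop "cat"
y = apply_insertion(y, {1: ["small", "dog"]})   # insert after "a"
print(y)  # ['<s>', 'a', 'small', 'dog', '</s>']
```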
Deletion The deletion policy reads the input sequence y and, for every token yi ∈ y, makes a binary decision πdel(d|i, y), which is 1 (delete this token) or 0 (keep it). We additionally constrain πdel(0|1, y) = πdel(0|n, y) = 1 to avoid breaking the sequence boundaries. The deletion classifier can also be seen as a fine-grained discriminator, as used in GANs (Goodfellow et al., 2014), where we predict a "fake" or "real" label for every predicted token.

Insertion In this work, the insertion atomic operation is slightly more complex to build because it involves two phases (placeholder prediction and token prediction), so that it is able to insert multiple tokens at the same slot. First, among all the possible insertion slots (yi, yi+1) in y, πplh(p|i, y) predicts the possibility of adding one or several placeholders. Then, for every placeholder predicted as above, a token prediction policy πtok(t|i, y) replaces the placeholders with actual tokens from the vocabulary. The two-stage insertion process can also be viewed as a hybrid of the Insertion Transformer (Stern et al., 2019) and masked language models (MLM, Devlin et al., 2018; Ghazvininejad et al., 2019).

Policy combination Recall that our two operations are complementary, hence we combine them in an alternating fashion. For example, in sequence generation from an empty input, the insertion policy is called first, followed by deletion, and this repeats until a certain stopping condition is fulfilled. Indeed, it is possible to leverage parallelism in this combination: we essentially decompose one iteration of our sequence generator into three phases, "delete tokens, insert placeholders, replace placeholders with new tokens", and within each phase, all operations are performed in parallel. More precisely, given the current sequence $y = (y_0, \ldots, y_n)$, and supposing the action to predict is

$$a = \{\underbrace{d_0, \ldots, d_n}_{\mathbf{d}};\ \underbrace{p_0, \ldots, p_{n-1}}_{\mathbf{p}};\ \underbrace{t_0^1, \ldots, t_0^{p_0}, \ldots, t_{n-1}^{p_{n-1}}}_{\mathbf{t}}\},$$

the policy for one iteration is

$$\pi(a \mid y) = \prod_{d_i \in \mathbf{d}} \pi^{\mathrm{del}}(d_i \mid i, y) \cdot \prod_{p_i \in \mathbf{p}} \pi^{\mathrm{plh}}(p_i \mid i, y') \cdot \prod_{t_i \in \mathbf{t}} \pi^{\mathrm{tok}}(t_i \mid i, y''), \qquad (1)$$

where $y' = \mathcal{E}(y, \mathbf{d})$ and $y'' = \mathcal{E}(y', \mathbf{p})$. We parallelize the computation within each sub-task.

3 Levenshtein Transformer

In this section, we present the specification of the Levenshtein Transformer and the dual-policy learning algorithm. Overall, our model takes a sequence of tokens (or none) as input and then iteratively modifies it by alternating between insertion and deletion, until the two combined policies converge. We describe the detailed learning and inference algorithms in the Appendix.

3.1 Model

We use the Transformer (Vaswani et al., 2017) as the basic building block. For conditional generation, the source x is included in each TransformerBlock. The states from the l-th block are

$$h_0^{(l+1)}, h_1^{(l+1)}, \ldots, h_n^{(l+1)} = \begin{cases} E_{y_0} + P_0,\ E_{y_1} + P_1,\ \ldots,\ E_{y_n} + P_n, & l = 0 \\ \mathrm{TransformerBlock}_l\big(h_0^{(l)}, h_1^{(l)}, \ldots, h_n^{(l)}\big), & l > 0 \end{cases} \qquad (2)$$

where $E \in \mathbb{R}^{|\mathcal{V}| \times d_{\mathrm{model}}}$ and $P \in \mathbb{R}^{N_{\mathrm{max}} \times d_{\mathrm{model}}}$ are the token and position embeddings, respectively. We illustrate the proposed LevT model for one refinement iteration (delete, insert) in Figure 1.
Policy Classifiers The decoder outputs $(h_0, h_1, \ldots, h_n)$ are passed to three policy classifiers:

1. Deletion Classifier: LevT scans over the input tokens (except for the boundaries) and predicts "kept" (0) or "deleted" (1) for each token position,

$$\pi_\theta^{\mathrm{del}}(d \mid i, y) = \mathrm{softmax}\big(h_i \cdot A^\top\big), \quad i = 1, \ldots, n-1, \qquad (3)$$

where $A \in \mathbb{R}^{2 \times d_{\mathrm{model}}}$, and we always keep the boundary tokens.

2. Placeholder Classifier: LevT predicts the number of tokens to be inserted at every pair of consecutive positions, by casting the representation to a categorical distribution:

$$\pi_\theta^{\mathrm{plh}}(p \mid i, y) = \mathrm{softmax}\big(\mathrm{concat}(h_i, h_{i+1}) \cdot B^\top\big), \quad i = 0, \ldots, n-1, \qquad (4)$$

where $B \in \mathbb{R}^{(K_{\mathrm{max}}+1) \times 2d_{\mathrm{model}}}$. Based on the number (0 to $K_{\mathrm{max}}$) of tokens it predicts, we insert that many placeholders at the current position. In our implementation, a placeholder is represented by a special token <PLH> reserved in the vocabulary.

3. Token Classifier: following the placeholder prediction, LevT needs to fill in tokens replacing all the placeholders. This is achieved by training a token predictor as follows:

$$\pi_\theta^{\mathrm{tok}}(t \mid i, y) = \mathrm{softmax}\big(h_i \cdot C^\top\big), \quad \forall y_i = \texttt{<PLH>}, \qquad (5)$$

where $C \in \mathbb{R}^{|\mathcal{V}| \times d_{\mathrm{model}}}$, with parameters shared with the embedding matrix.

Weight Sharing Our default implementation assumes the three operations share the same Transformer backbone, so that each operation benefits from the features learned for the others. However, it is also possible to disable weight sharing and train a separate decoder for each operation, which increases the capacity of the model while not affecting the overall inference time.

Early Exit Although it is parameter-efficient to share the same Transformer architecture across the above three heads, there is room for improvement, as one decoding iteration requires three full passes of the network. To trade off performance against computational cost, we propose to perform early exit (attaching the classifier to an intermediate block instead of the last one) for πdel and πplh to reduce computation, while keeping πtok always based on the last block, considering that token prediction is usually more challenging than the other two tasks.
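The early-exit idea can be sketched as follows: each operation runs its own decoder pass, but the deletion and placeholder passes stop at intermediate blocks m and n (matching the LevT(m-n) notation used in the efficiency ablation later), while the token pass always runs the full stack. All names here are assumptions of this sketch.

```python
def run_heads_early_exit(blocks, embed, y_del, y_plh, y_tok, m, n,
                         del_head, plh_head, tok_head):
    """Each operation gets its own decoder pass; the deletion and
    placeholder passes exit early at blocks m and n to save computation."""
    def pass_through(y, depth):
        h = embed(y)                  # Eq. (2), l = 0
        for block in blocks[:depth]:  # truncated TransformerBlock stack
            h = block(h)
        return h
    return (
        del_head(pass_through(y_del, m)),          # exits after block m
        plh_head(pass_through(y_plh, n)),          # exits after block n
        tok_head(pass_through(y_tok, len(blocks))) # always the last block
    )
```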
3.2 Dual-policy Learning

Imitation Learning We use imitation learning to train the Levenshtein Transformer. Essentially, we let the agent imitate behaviors drawn from some expert policy π*. The expert policy is derived either from direct usage of the ground-truth targets or from a less noisy version filtered by sequence distillation (Kim and Rush, 2016). The objective is to maximize the following expectation:

$$\underbrace{\mathbb{E}_{y_{\mathrm{del}} \sim d_{\tilde{\pi}_{\mathrm{del}}},\ d^* \sim \pi^*} \sum_{d_i^* \in d^*} \log \pi_\theta^{\mathrm{del}}(d_i^* \mid i, y_{\mathrm{del}})}_{\text{Deletion Objective}} \ +\ \underbrace{\mathbb{E}_{y_{\mathrm{ins}} \sim d_{\tilde{\pi}_{\mathrm{ins}}},\ p^*, t^* \sim \pi^*} \bigg[ \sum_{p_i^* \in p^*} \log \pi_\theta^{\mathrm{plh}}(p_i^* \mid i, y_{\mathrm{ins}}) + \sum_{t_i^* \in t^*} \log \pi_\theta^{\mathrm{tok}}(t_i^* \mid i, y'_{\mathrm{ins}}) \bigg]}_{\text{Insertion Objective}},$$

where $y'_{\mathrm{ins}}$ is the output after inserting the placeholders $p^*$ into $y_{\mathrm{ins}}$. Here $\tilde{\pi}_{\mathrm{del}}$ and $\tilde{\pi}_{\mathrm{ins}}$ are the roll-in policies, and we repeatedly draw states (sequences) from their induced state distributions $d_{\tilde{\pi}_{\mathrm{del}}}$ and $d_{\tilde{\pi}_{\mathrm{ins}}}$. The expert policy is executed on these states to return suggested actions, over which we then maximize the conditional log-likelihood. By definition, the roll-in policy determines the state distribution fed to πθ during training. In this work, we have two strategies to construct the roll-in policy: adding noise to the ground-truth, or using the output from the adversary policy. Figure 2 shows a diagram of this learning paradigm. We formally write down the roll-in policies as follows.

1. Learning to Delete: we design $\tilde{\pi}_{\mathrm{del}}$ as a stochastic mixture between the initial input $y^0$ and the output of applying the model's insertion policy, with some mixture factor $\alpha \in [0, 1]$:

$$d_{\tilde{\pi}_{\mathrm{del}}} = \big\{\, y^0 \ \text{if}\ u < \alpha \ \text{else}\ \mathcal{E}\big(\mathcal{E}(y', p^*),\ \tilde{t}\big),\quad p^* \sim \pi^*,\ \tilde{t} \sim \pi_\theta \,\big\}, \qquad (6)$$

where $u \sim \mathrm{Uniform}[0, 1]$ and $y'$ is any sequence ready for token insertion; $\tilde{t}$ is obtained by sampling, instead of taking the argmax, from Eq. (5).

2. Learning to Insert: similar to the deletion step, we apply a mixture of the deletion output and a randomly word-dropped version of the ground-truth, inspired by recent advances in training masked language models (Devlin et al., 2018). We use random dropping as a form of noise injection to encourage more exploration. Let $\beta \in [0, 1]$ and $u \sim \mathrm{Uniform}[0, 1]$; then

$$d_{\tilde{\pi}_{\mathrm{ins}}} = \big\{\, \mathcal{E}(y^0, d^*),\ d^* \sim \pi^* \ \text{if}\ u < \beta \ \text{else}\ \mathcal{E}(y^*, \tilde{d}),\ \tilde{d} \sim \pi^{\mathrm{RND}} \,\big\}. \qquad (7)$$

Expert Policy It is crucial to construct an expert policy in imitation learning that is neither too hard nor too weak to learn from. Specifically, we consider two types of experts:

1. Oracle: One way is to build an oracle which has access to the ground-truth sequence. It returns the optimal actions a* (either oracle insertion p*, t* or oracle deletion d*) by

$$a^* = \operatorname*{argmin}_{a}\ \mathcal{D}\big(y^*, \mathcal{E}(y, a)\big). \qquad (8)$$

Here we use the Levenshtein distance (Levenshtein, 1965)² as $\mathcal{D}$, considering that the action suggestions can be obtained efficiently by dynamic programming.

2. Distillation: We also explore using another teacher model to provide the expert policy, which is known as sequence-level knowledge distillation (Kim and Rush, 2016). This technique has been widely used in previous approaches to non-autoregressive generation (Gu et al., 2018). More precisely, we first train an autoregressive teacher model using the same datasets, and then replace the ground-truth sequence y* by the beam-search result of this teacher model, yAR. We use the same mechanism as with the ground-truth oracle to find the suggested actions.

3.3 Inference

Greedy Decoding At inference time, we apply the trained model to the initial sequence y0 for several iterations, greedily picking the actions with the highest probabilities in Eqs. (3), (4) and (5). Moreover, we find that using search (instead of greedy decoding) or noisy parallel decoding (Cho, 2016) does not yield much gain for LevT. This observation is quite the opposite of what has been widely found in autoregressive decoding. We hypothesize two possible reasons: (i) the local optimum reached by greedy decoding in autoregressive models is often far from the global optimum, and search techniques resolve this issue with tabularization; in our case, however, because LevT inserts and deletes tokens dynamically, it can easily revoke tokens that are found to be sub-optimal and re-insert better ones; (ii) the log-probability of LevT is not a good metric for selecting the best output. Still, we believe further improvements are possible with an external re-ranker, e.g., an autoregressive teacher model. We leave this to future work.

Termination Condition Decoding stops when one of the following two conditions is fulfilled:

1. Looping: Generation is terminated if two consecutive refinement iterations return the same output, which happens when (i) there are no words to delete or insert, or (ii) the agent gets stuck in an infinite loop, i.e., the insertion and deletion counter each other and keep looping.

2. Timeout: We further set a maximum number of iterations (timeout) to guarantee constant-time complexity in the worst case (Lee et al., 2018; Ghazvininejad et al., 2019).

Penalty for Empty Placeholders Similar to Stern et al. (2019), we add a penalty for predicting "empty" (zero) placeholders during decoding, since over-predicting empty slots may result in overly short outputs. A penalty term γ ∈ [0, 3] is subtracted from the logits of 0 in Eq. (4).

²We only consider the variant which computes insertions and deletions; no substitution is considered.
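Putting the pieces together, here is a high-level sketch of greedy decoding with the two termination conditions. The `delete_step` and `insert_step` helpers are assumed to greedily apply the argmax actions of Eqs. (3)-(5); the empty-placeholder penalty γ is omitted for brevity.

```python
def greedy_decode(y0, delete_step, insert_step, max_iters=10):
    """Alternate deletion and insertion until the output stops changing
    ("looping") or max_iters is reached ("timeout")."""
    y = list(y0)
    for _ in range(max_iters):        # timeout guarantees termination
        y_prev = y
        y = delete_step(y)            # argmax of Eq. (3)
        y = insert_step(y)            # argmax of Eqs. (4) and (5)
        if y == y_prev:               # looping: two iterations agree
            break
    return y
```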
(Figure 3: a translation example, with the inserted tokens shown in purple and the deleted tokens shown with red strikethrough.)

4 Experiments

We validate the efficiency, effectiveness, and flexibility of the Levenshtein Transformer extensively across three different tasks: machine translation (MT), text summarization (TS), and automatic post-editing (APE) for machine translation, from both the generation (§4.1) and refinement (§4.2) perspectives.

4.1 Sequence Generation

From the sequence generation perspective, we evaluate the LevT model on MT and TS. As a special case, sequence generation assumes the empty input y0 = <S></S>, and no initial deletion is applied.

Data & Evaluation We use three diverse language pairs for the MT experiments: WMT'16 Romanian-English (Ro-En)³, WMT'14 English-German (En-De)⁴ and WAT2017 Small-NMT English-Japanese (En-Ja, Nakazawa et al., 2017)⁵. The TS experiments use preprocessed data from the Annotated English Gigaword (Gigaword, Rush et al., 2015)⁶. We learn a byte-pair encoding (BPE, Sennrich et al., 2016) vocabulary on the tokenized data. Detailed dataset statistics can be found in the Appendix. For evaluation metrics, we use BLEU (Papineni et al., 2002) for MT and ROUGE-1,2,L (Lin, 2004) for TS. Before computing BLEU scores for Japanese output, we always segment Japanese words using KyTea⁷.

Models & Training We adopt the model architecture of Transformer base (Vaswani et al., 2017) for the proposed LevT model and the autoregressive baseline. All Transformer-based models are trained on 8 Nvidia Volta GPUs for a maximum of 300K steps with a total batch size of around 65,536 tokens per step (we leave more details to the Appendix).

³http://www.statmt.org/wmt16/translation-task.html
⁴http://www.statmt.org/wmt14/translation-task.html
⁵http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2017/snmt/index.html
⁶https://github.com/harvardnlp/sent-summary
⁷http://www.phontron.com/kytea/

Overall results We present our main results on generation quality and decoding speed in Table 1. We measure speed by the average generation latency when generating one sequence at a time on a single Nvidia V100 GPU; to remove implementation bias, we also present the number of decoder iterations as a reference. For both the MT and summarization tasks, the proposed LevT achieves comparable and sometimes better generation quality compared to the strong autoregressive baseline, while being much more efficient at decoding. A translation example is shown in Figure 3, and we leave more to the Appendix. We conjecture that the benefit of the distillation expert comes from the teacher model's output having fewer modes and being much less noisy than the real data; consequently, LevT needs fewer iterations to converge to this expert policy.

Ablation on Efficiency In Figure 4a, we plot the average number of iterations against the input length on a monolingual corpus; LevT learns to adjust its decoding time accordingly. We also explore the "early exit" variants, where LevT(m-n) denotes a model with m and n blocks for deletion (Eq. (3)) and placeholder prediction (Eq. (4)), respectively. Figure 4b shows that, although it compromises quality a bit, our model with early exit achieves up to a ×5 speed-up (execution time) compared against a strong autoregressive Transformer using beam search.

Ablation on Weight Sharing We also evaluate LevT with the different weight-sharing settings noted in §3.1. The results of models trained with the oracle or with distillation are listed in Table 2a.
We observe that weight sharing is beneficial, especially between the two insertion operations (the placeholder and token classifiers). Moreover, not sharing the deletion operation with insertion yields another +0.5 BLEU improvement over the default setting, which may indicate that insertion and deletion capture complementary information and benefit from the larger capacity of being learned separately.

Importance of the mixture roll-in policy We perform an ablation study on the learning algorithm. Specifically, we train a model with no mixing of πθ in Eq. (6); we name this experiment DAE due to its resemblance to a denoising autoencoder, and closely follow the standard pipeline established by Lee et al. (2018). Table 2b shows this comparison: the deletion loss of DAE is much smaller, while its generation BLEU score is inferior. We conjecture that this is caused by the mismatch between the states produced by the model and the states induced by the roll-in policy used to train the DAE.

vs. Existing Refinement-based Models Table 2a also includes results from two relevant recent works that likewise incorporate iterative refinement in non-autoregressive sequence generation. For a fair comparison, we use the result with length beam 1 from Ghazvininejad et al. (2019). Although both approaches use similar "denoising" objectives to train the refinement process, our model explicitly learns "insertion" and "deletion" in a dual-policy learning fashion, and outperforms both models.

4.2 Sequence Refinement

We evaluate LevT's capability of refining sequence outputs on the APE task. In this setting, the inputs are pairs of a source sequence and a black-box MT system's output, and the ground-truth outputs come from real human edits, expanded with synthetic data.

Dataset We follow the usual protocol for synthetic APE experiments (Grangier and Auli, 2017): we first train the input MT system on half of the dataset, and then train a refinement model on the other half, based on the outputs produced by the MT system trained in the first phase. For the real APE task, we use the data from the WMT17 Automatic Post-Editing Shared Task⁸ on En-De, which contains both real PE triples and a large-scale synthetic corpus.

Models & Evaluation The baseline model is a standard Transformer encoding the concatenation of the source and the MT system's output. For the MT system, we want imperfect systems whose outputs need refinement; we consider a statistical phrase-based MT system (PBMT, Koehn et al., 2003) and an RNN-based NMT system (Bahdanau et al., 2015). Apart from BLEU scores, we additionally report the translation error rate (TER, Snover et al., 2006), as it is widely used in the APE literature.

⁸http://www.statmt.org/wmt17/ape-task.html

Overall results We show the main comparison in Table 3. When trained from scratch, LevT consistently improves the performance of the input MT system (either PBMT or NMT), and it also achieves better performance than the autoregressive Transformer in most cases.

Pre-training on MT Thanks to the generality of the LevT model, we show it is feasible to directly apply a LevT model trained for generation to refinement tasks, in this case MT and APE. We call this the "zero-shot post-editing" setting. According to Table 3, the pre-trained MT models are always capable of improving the initial MT input in the synthetic tasks. The real APE task, however, differs quite a bit from the synthetic tasks, because human translators normally only fix a few spotted errors.
This results in very high BLEU scores even for the "Do-nothing" column. Nevertheless, the pre-trained MT model achieves the best results after fine-tuning on the PE data, indicating that LevT is able to transfer the knowledge learned for generation to refinement.

Collaboration with an Oracle Thanks to the separation of the insertion and deletion operations, LevT has better interpretability and controllability. For example, we test the ability of LevT to adapt to oracle (e.g., human translator) instructions. As shown in Figure 5, both the MT and PE tasks show huge improvements if the oracle deletion is given at every step, and improve even further if the oracle provides both the correct deletions and the number of placeholders to insert. This also sheds some light on computer-assisted text editing for human translators.

5 Related Work

Non-Autoregressive and Non-Monotonic Decoding Breaking the autoregressive constraint and the monotonic (left-to-right) decoding order of classic neural sequence generation systems has recently attracted much interest. Stern et al. (2018) and Wang et al. (2018) designed partially parallel decoding schemes that output multiple tokens at each step. Gu et al. (2018) proposed a non-autoregressive framework using discrete latent variables, which was later adopted in Lee et al. (2018) as an iterative refinement process. Ghazvininejad et al. (2019) introduced the masked language modeling objective from BERT (Devlin et al., 2018) to non-autoregressively predict and refine translations. Welleck et al. (2019), Stern et al. (2019), and Gu et al. (2019) generate translations non-monotonically, by adding words to the left or right of previous ones or by inserting words in arbitrary order to form a sequence.

Editing-Based Models Several prior works have explored incorporating "editing" operations into sequence generation tasks. For instance, Novak et al. (2016) predict and apply token substitutions iteratively on phrase-based MT system outputs using convolutional neural networks. QuickEdit (Grangier and Auli, 2017) and the deliberation network (Xia et al., 2017) both consist of two autoregressive decoders, where the second decoder refines the translation generated by the first. Guu et al. (2018) propose a neural editor that learns language modeling by first retrieving a prototype and then editing it. Freitag et al. (2019) correct patterned errors in MT system outputs using Transformer models trained on monolingual data. Additionally, the use of Levenshtein distance with dynamic programming as the oracle policy was also proposed in Sabour et al. (2018) and Dong et al. (2019). Different from these works, our proposed model is non-autoregressive and simultaneously inserts and deletes multiple tokens in an iterative fashion.

6 Conclusion

We propose the Levenshtein Transformer, a neural sequence generation model based on insertion and deletion. The resulting model achieves comparable or better performance with much-improved decoding efficiency, and unifies sequence generation and refinement in one model. The insertion and deletion operations are arguably more similar to how humans write or edit text. For future work, it is promising to extend this model to human-in-the-loop generation.

Acknowledgements We would like to thank Kyunghyun Cho, Marc'Aurelio Ranzato, Douwe Kiela, Qi Liu and our colleagues at Facebook AI Research for valuable feedback, discussions and technical assistance.
1. What is the novel approach introduced in the paper for sequence generation? 2. What are the strengths of the proposed model and training procedure? 3. What are the improvements achieved by the proposed method compared to the state-of-the-art? 4. Are there any minor questions or concerns regarding technical details in the paper? 5. How does the reviewer assess the significance and potential impact of the work on future research?
Review
Review Originality: It is an interesting work, casting the sequence generation task as two iterative tasks of insertion and deletion. I think the formulation is new, coupled with the training procedure based on imitation learning with two policies, i.e., deletion and insertion. Quality: The proposed model and its training procedure seem apt and well designed. Experiments are carried out carefully, with consistent gains over the SOTA, i.e., the Transformer, together with faster inference speed. Clarity: This paper is clearly written, though I have a couple of minor questions regarding technical details; see the details in the "Improvements" section. Significance: Given the inference efficiency and its reasonable quality improvements, I feel this work might have the potential to impact future research. Other comment: line 89: "we our policy for one iteration is" -> "{our, the}(?) policy for ..."
NIPS
Title Levenshtein Transformer

Abstract Modern neural sequence generation models are built to either generate tokens step-by-step from scratch or (iteratively) modify a sequence of tokens bounded by a fixed length. In this work, we develop the Levenshtein Transformer, a new partially autoregressive model devised for more flexible and amenable sequence generation. Unlike previous approaches, the basic operations of our model are insertion and deletion, whose combination facilitates not only generation but also sequence refinement, allowing dynamic length changes. We also propose a set of new training techniques dedicated to these operations, effectively exploiting one as the other's learning signal thanks to their complementary nature. Experiments show that the proposed model achieves comparable or even better performance with much-improved efficiency on both generation (e.g., machine translation, text summarization) and refinement tasks (e.g., automatic post-editing). We further confirm the flexibility of our model by showing that a Levenshtein Transformer trained for machine translation can straightforwardly be used for automatic post-editing.¹

1 Introduction

Neural sequence generation models are widely developed and deployed in tasks such as machine translation (Bahdanau et al., 2015; Vaswani et al., 2017). In the current frameworks, the most popular autoregressive models generate tokens step-by-step. Recent non-autoregressive approaches (Gu et al., 2018; Kaiser et al., 2018; Lee et al., 2018) have proved it possible to perform generation, with comparable if not better quality, within a much smaller number of decoding iterations.

In this paper, we propose the Levenshtein Transformer (LevT), aiming to address the lack of flexibility of current decoding models. Notably, in existing frameworks, the length of a generated sequence is either fixed or monotonically increasing as decoding proceeds. This remains incompatible with human-level intelligence, where humans can revise, replace, revoke or delete any part of their generated text. Hence, LevT is proposed to bridge this gap by breaking the so-far standardized decoding mechanism and replacing it with two basic operations: insertion and deletion.

We train LevT using imitation learning. The resulting model contains two policies, which are executed in an alternating manner. Empirically, we show that LevT achieves comparable or better results than a standard Transformer model on machine translation and summarization, while maintaining the efficiency advantages of parallel decoding, similar to Lee et al. (2018). With this model, we argue that decoding becomes more flexible. For example, when the decoder is given an empty token, it falls back to a normal sequence generation model; on the other hand, it acts as a refinement model when the initial state is a low-quality generated sequence. Indeed, we show that a LevT trained on machine translation is directly applicable to translation post-editing without any change. This would not be possible with any framework in the literature, because generation and refinement are treated as two different tasks due to the models' inductive biases. One crucial component of the LevT framework is the learning algorithm.

¹Code for reproducing this paper is released at https://github.com/pytorch/fairseq/tree/master/examples/nonautoregressive_translation
We leverage the characteristics of insertion and deletion: they are complementary, but also adversarial. The algorithm we propose is called "dual policy learning". The idea is that, when training one policy (insertion or deletion), we use the output of its adversary from the previous iteration as input, while an expert policy is drawn upon to provide a correction signal. Although, in theory, this learning algorithm is applicable to other imitation learning scenarios where a dual adversarial policy exists, in this work we primarily focus on a proof of concept of this algorithm, applying it to train the proposed LevT model.

To this end, we summarize our contributions as follows:

• We propose the Levenshtein Transformer (LevT), a new sequence generation model composed of insertion and deletion operations. This model achieves comparable or even better results than a strong Transformer baseline in both machine translation and text summarization, but with much better efficiency (up to a ×5 speed-up in terms of actual machine execution time);

• We propose a corresponding learning algorithm under the theoretical framework of imitation learning, tackling the complementary and adversarial nature of the dual policies;

• We recognize our model as a pioneering attempt to unify sequence generation and refinement, thanks to its built-in flexibility. With this unification, we empirically validate the feasibility of applying a LevT model trained on machine translation directly to translation post-editing, without any change.

2 Problem Formulation

2.1 Sequence Generation and Refinement

We unify the general problems of sequence generation and refinement by casting them as a Markov Decision Process (MDP) defined by a tuple $(\mathcal{Y}, \mathcal{A}, \mathcal{E}, \mathcal{R}, y^0)$. We consider a setup consisting of an agent interacting with an environment $\mathcal{E}$, which receives the agent's editing actions and returns the modified sequence. We define $\mathcal{Y} = \mathcal{V}^{N_{\mathrm{max}}}$ as the set of discrete sequences up to length $N_{\mathrm{max}}$, where $\mathcal{V}$ is a vocabulary of symbols. At every decoding iteration, the agent receives an input $y$, drawn either from scratch or from an incomplete generation, chooses an action $a$, and gets a reward $r$. We use $\mathcal{A}$ to denote the set of actions and $\mathcal{R}$ the reward function. Generally, the reward function measures the distance between the generation and the ground-truth sequence, $\mathcal{R}(y) = -\mathcal{D}(y, y^*)$, where $\mathcal{D}$ can be any distance measure such as the Levenshtein distance (Levenshtein, 1965). It is crucial to incorporate the initial sequence $y^0 \in \mathcal{Y}$ into our formulation: when $y^0$ is an already generated sequence from another system, the agent essentially learns to do refinement, while it falls back to generation if $y^0$ is an empty sequence. The agent is modeled by a policy $\pi$ that maps the current generation to a probability distribution over $\mathcal{A}$; that is, $\pi : \mathcal{Y} \rightarrow P(\mathcal{A})$.

2.2 Actions: Deletion & Insertion

Following the above MDP formulation, given a sequence $y^k = (y_1, y_2, \ldots, y_n)$, the two basic actions, deletion and insertion, are called to generate $y^{k+1} = \mathcal{E}(y^k, a^{k+1})$. Here we let $y_1$ and $y_n$ be the special symbols <s> and </s>, respectively. Since we mainly focus on the policy of a single round of generation, the superscripts are omitted in this section for simplicity. For conditional generation such as MT, our policy also takes an input of source information $x$, which is likewise omitted here.
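As a concrete reference for the reward R(y) = -D(y, y*) defined above, here is a sketch of the insertion/deletion-only Levenshtein distance used throughout this paper (substitutions are excluded, as noted later in §3.2); the rolling-row implementation is an assumption of this sketch.

```python
def reward(y, y_star):
    """Sketch of R(y) = -D(y, y*) with the insertion/deletion-only
    Levenshtein variant (no substitutions), using O(m) memory."""
    n, m = len(y), len(y_star)
    prev = list(range(m + 1))          # distance from empty prefix of y
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            if y[i - 1] == y_star[j - 1]:
                cur[j] = prev[j - 1]   # match: no edit needed
            else:
                cur[j] = 1 + min(prev[j], cur[j - 1])  # delete or insert
        prev = cur
    return -prev[m]
```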
Deletion The deletion policy reads the input sequence y and, for every token yi ∈ y, makes a binary decision πdel(d|i, y), which is 1 (delete this token) or 0 (keep it). We additionally constrain πdel(0|1, y) = πdel(0|n, y) = 1 to avoid breaking the sequence boundaries. The deletion classifier can also be seen as a fine-grained discriminator, as used in GANs (Goodfellow et al., 2014), where we predict a "fake" or "real" label for every predicted token.

Insertion In this work, the insertion atomic operation is slightly more complex to build because it involves two phases (placeholder prediction and token prediction), so that it is able to insert multiple tokens at the same slot. First, among all the possible insertion slots (yi, yi+1) in y, πplh(p|i, y) predicts the possibility of adding one or several placeholders. Then, for every placeholder predicted as above, a token prediction policy πtok(t|i, y) replaces the placeholders with actual tokens from the vocabulary. The two-stage insertion process can also be viewed as a hybrid of the Insertion Transformer (Stern et al., 2019) and masked language models (MLM, Devlin et al., 2018; Ghazvininejad et al., 2019).

Policy combination Recall that our two operations are complementary, hence we combine them in an alternating fashion. For example, in sequence generation from an empty input, the insertion policy is called first, followed by deletion, and this repeats until a certain stopping condition is fulfilled. Indeed, it is possible to leverage parallelism in this combination: we essentially decompose one iteration of our sequence generator into three phases, "delete tokens, insert placeholders, replace placeholders with new tokens", and within each phase, all operations are performed in parallel. More precisely, given the current sequence $y = (y_0, \ldots, y_n)$, and supposing the action to predict is

$$a = \{\underbrace{d_0, \ldots, d_n}_{\mathbf{d}};\ \underbrace{p_0, \ldots, p_{n-1}}_{\mathbf{p}};\ \underbrace{t_0^1, \ldots, t_0^{p_0}, \ldots, t_{n-1}^{p_{n-1}}}_{\mathbf{t}}\},$$

the policy for one iteration is

$$\pi(a \mid y) = \prod_{d_i \in \mathbf{d}} \pi^{\mathrm{del}}(d_i \mid i, y) \cdot \prod_{p_i \in \mathbf{p}} \pi^{\mathrm{plh}}(p_i \mid i, y') \cdot \prod_{t_i \in \mathbf{t}} \pi^{\mathrm{tok}}(t_i \mid i, y''), \qquad (1)$$

where $y' = \mathcal{E}(y, \mathbf{d})$ and $y'' = \mathcal{E}(y', \mathbf{p})$. We parallelize the computation within each sub-task.

3 Levenshtein Transformer

In this section, we present the specification of the Levenshtein Transformer and the dual-policy learning algorithm. Overall, our model takes a sequence of tokens (or none) as input and then iteratively modifies it by alternating between insertion and deletion, until the two combined policies converge. We describe the detailed learning and inference algorithms in the Appendix.

3.1 Model

We use the Transformer (Vaswani et al., 2017) as the basic building block. For conditional generation, the source x is included in each TransformerBlock. The states from the l-th block are

$$h_0^{(l+1)}, h_1^{(l+1)}, \ldots, h_n^{(l+1)} = \begin{cases} E_{y_0} + P_0,\ E_{y_1} + P_1,\ \ldots,\ E_{y_n} + P_n, & l = 0 \\ \mathrm{TransformerBlock}_l\big(h_0^{(l)}, h_1^{(l)}, \ldots, h_n^{(l)}\big), & l > 0 \end{cases} \qquad (2)$$

where $E \in \mathbb{R}^{|\mathcal{V}| \times d_{\mathrm{model}}}$ and $P \in \mathbb{R}^{N_{\mathrm{max}} \times d_{\mathrm{model}}}$ are the token and position embeddings, respectively. We illustrate the proposed LevT model for one refinement iteration (delete, insert) in Figure 1.
Policy Classifiers The decoder outputs $(h_0, h_1, \ldots, h_n)$ are passed to three policy classifiers:

1. Deletion Classifier: LevT scans over the input tokens (except for the boundaries) and predicts "kept" (0) or "deleted" (1) for each token position,

$$\pi_\theta^{\mathrm{del}}(d \mid i, y) = \mathrm{softmax}\big(h_i \cdot A^\top\big), \quad i = 1, \ldots, n-1, \qquad (3)$$

where $A \in \mathbb{R}^{2 \times d_{\mathrm{model}}}$, and we always keep the boundary tokens.

2. Placeholder Classifier: LevT predicts the number of tokens to be inserted at every pair of consecutive positions, by casting the representation to a categorical distribution:

$$\pi_\theta^{\mathrm{plh}}(p \mid i, y) = \mathrm{softmax}\big(\mathrm{concat}(h_i, h_{i+1}) \cdot B^\top\big), \quad i = 0, \ldots, n-1, \qquad (4)$$

where $B \in \mathbb{R}^{(K_{\mathrm{max}}+1) \times 2d_{\mathrm{model}}}$. Based on the number (0 to $K_{\mathrm{max}}$) of tokens it predicts, we insert that many placeholders at the current position. In our implementation, a placeholder is represented by a special token <PLH> reserved in the vocabulary.

3. Token Classifier: following the placeholder prediction, LevT needs to fill in tokens replacing all the placeholders. This is achieved by training a token predictor as follows:

$$\pi_\theta^{\mathrm{tok}}(t \mid i, y) = \mathrm{softmax}\big(h_i \cdot C^\top\big), \quad \forall y_i = \texttt{<PLH>}, \qquad (5)$$

where $C \in \mathbb{R}^{|\mathcal{V}| \times d_{\mathrm{model}}}$, with parameters shared with the embedding matrix.

Weight Sharing Our default implementation assumes the three operations share the same Transformer backbone, so that each operation benefits from the features learned for the others. However, it is also possible to disable weight sharing and train a separate decoder for each operation, which increases the capacity of the model while not affecting the overall inference time.

Early Exit Although it is parameter-efficient to share the same Transformer architecture across the above three heads, there is room for improvement, as one decoding iteration requires three full passes of the network. To trade off performance against computational cost, we propose to perform early exit (attaching the classifier to an intermediate block instead of the last one) for πdel and πplh to reduce computation, while keeping πtok always based on the last block, considering that token prediction is usually more challenging than the other two tasks.
Expert Policy. It is crucial in imitation learning to construct an expert policy that is neither too hard nor too weak to learn from. Specifically, we consider two types of experts:
1. Oracle: One way is to build an oracle that has access to the ground-truth sequence. It returns the optimal actions $a^*$ (either oracle insertion $p^*, t^*$ or oracle deletion $d^*$) by
$$a^* = \arg\min_a \mathcal{D}(y^*, \mathcal{E}(y, a)) \qquad (8)$$
Here, we use the Levenshtein distance (Levenshtein, 1965)² as $\mathcal{D}$, since the action suggestions can be obtained efficiently by dynamic programming.
2. Distillation: We also explore using another teacher model to provide the expert policy, which is known as sequence-level knowledge distillation (Kim and Rush, 2016). This technique has been widely used in previous approaches to non-autoregressive generation (Gu et al., 2018). More precisely, we first train an autoregressive teacher model on the same datasets and then replace the ground-truth sequence $y^*$ by the beam-search result of this teacher model, $y^{AR}$. We use the same mechanism as with the ground-truth oracle to find the suggested actions.
3.3 Inference
Greedy Decoding. At inference time, we apply the trained model to the initial sequence $y^0$ for several iterations, greedily picking the actions with the highest probabilities in Eqs. (3), (4), and (5). Moreover, we find that using search (instead of greedy decoding) or noisy parallel decoding (Cho, 2016) does not yield much gain for LevT. This observation is quite the opposite of what has been widely observed in autoregressive decoding. We hypothesize two reasons for this: (i) the local optimum reached by greedy decoding in autoregressive models is often far from the global optimum, and search techniques resolve this issue with tabularization; in our case, however, because LevT inserts or deletes tokens dynamically, it can easily revoke tokens that are found to be sub-optimal and re-insert better ones; (ii) the log-probability of LevT is not a good metric for selecting the best output. We do, however, believe further improvements could come from including an external re-ranker, e.g. an autoregressive teacher model; we leave this discussion to future work.
Termination Condition. Decoding stops when one of the following two conditions is fulfilled:
1. Looping: generation is terminated if two consecutive refinement iterations return the same output, which can happen when (i) there are no words to delete or insert, or (ii) the agent gets stuck in an infinite loop, i.e. the insertion and deletion counter each other and keep looping.
2. Timeout: we further set a maximum number of iterations (timeout) to guarantee constant-time complexity in the worst case (Lee et al., 2018; Ghazvininejad et al., 2019).
Penalty for Empty Placeholders. Similarly to Stern et al. (2019), we add a penalty for inserting "empty" placeholders during decoding, since over-inserting them may result in shorter outputs. A penalty term $\gamma \in [0, 3]$ is subtracted from the logits of 0 in Eq. (4).
²We only consider the variant that computes insertion and deletion only; no substitution is considered.
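The inference loop itself is simple; the following schematic sketch shows the two termination conditions (apply_deletion and apply_insertion stand for one greedy pass of each policy and are placeholders for model calls, not a real API):

    def levt_decode(y0, apply_deletion, apply_insertion, max_iters=10):
        # Greedy LevT decoding: alternate delete / insert passes until the
        # output stops changing (looping) or a maximum number of iterations
        # is reached (timeout). In practice, a penalty gamma is also
        # subtracted from the logit of inserting zero placeholders.
        y = y0
        for _ in range(max_iters):
            y_next = apply_insertion(apply_deletion(y))
            if y_next == y:      # no change, or insertion and deletion cancel out
                break
            y = y_next
        return y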
(Figure caption: inserted tokens are shown in purple and deleted tokens with red strikethrough.)
4 Experiments
We validate the efficiency, effectiveness, and flexibility of the Levenshtein Transformer extensively across three different tasks, machine translation (MT), text summarization (TS), and automatic post-editing (APE) for machine translation, from both the generation (§4.1) and refinement (§4.2) perspectives.
4.1 Sequence Generation
From the sequence generation perspective, we evaluate the LevT model on MT and TS. As a special case, sequence generation assumes an empty input $y^0 = \text{<S></S>}$, and no initial deletion is applied.
Data & Evaluation. We use three diverse language pairs for the MT experiments: WMT'16 Romanian-English (Ro-En)³, WMT'14 English-German (En-De)⁴, and WAT2017 Small-NMT English-Japanese (En-Ja, Nakazawa et al., 2017)⁵. The TS experiments use preprocessed data from the Annotated English Gigaword (Gigaword, Rush et al., 2015)⁶. We learn a byte-pair encoding (BPE, Sennrich et al., 2016) vocabulary on the tokenized data. Detailed dataset statistics can be found in the Appendix. As evaluation metrics, we use BLEU (Papineni et al., 2002) for MT and ROUGE-1,2,L (Lin, 2004) for TS. Before computing BLEU scores for Japanese output, we always segment Japanese words using KyTea⁷.
Models & Training. We adopt the Transformer-base architecture (Vaswani et al., 2017) for both the proposed LevT model and the autoregressive baseline. All Transformer-based models are trained on 8 Nvidia Volta GPUs for a maximum of 300K steps with a total batch size of around 65,536 tokens per step (we leave more details to the Appendix).
³http://www.statmt.org/wmt16/translation-task.html
⁴http://www.statmt.org/wmt14/translation-task.html
⁵http://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2017/snmt/index.html
⁶https://github.com/harvardnlp/sent-summary
⁷http://www.phontron.com/kytea/
Overall results. We present our main results on generation quality and decoding speed in Table 1. We measure speed by the average latency of generating one sequence at a time on a single Nvidia V100 GPU. To remove implementation bias, we also report the number of decoder iterations as a reference. For both the MT and summarization tasks, our proposed LevT achieves comparable and sometimes better generation quality than the strong autoregressive baseline, while being much more efficient at decoding. A translation example is shown in Figure 3, and we leave more in the Appendix. We conjecture that this is because the output of the teacher model possesses fewer modes and is much less noisy than the real data; consequently, LevT needs fewer iterations to converge to this expert policy.
Ablation on Efficiency. In Figure 4a, we plot the average number of iterations against input length on a monolingual corpus: LevT learns to adjust its decoding time accordingly. We also explore the "early exit" variants, denoting by LevT(m-n) a model with m and n blocks for deletion (Eq. (3)) and placeholder prediction (Eq. (4)), respectively. Figure 4b shows that, although it compromises quality a little, our model with early exit achieves up to a ×5 speed-up in execution time compared with a strong autoregressive Transformer using beam search.
Ablation on Weight Sharing. We also evaluate LevT with the different weight-sharing options noted in §3.1. The results of models trained with the oracle or with distillation are listed in Table 2a.
We observe that weight sharing is beneficial, especially between the two insertion operations (the placeholder and token classifiers). There is also a further +0.5 BLEU improvement from not sharing the deletion operation with insertion, compared to the default setting, which may indicate that insertion and deletion capture complementary information and benefit from the larger capacity of being learned separately.
Importance of the mixture roll-in policy. We perform an ablation study on the learning algorithm. Specifically, we train a model with no mixing of $\pi_\theta$ in Equation (6). We name this experiment DAE due to its resemblance to a denoising autoencoder, and closely follow the standard pipeline established by Lee et al. (2018). Table 2b shows this comparison: the deletion loss of DAE is much smaller, while its generation BLEU score is inferior. We conjecture that this is caused by the mismatch between the states produced by the model and those produced by the roll-in policy used to train the DAE.
vs. Existing Refinement-based Models. Table 2a also includes results from two relevant recent works that likewise incorporate iterative refinement in non-autoregressive sequence generation. For a fair comparison, we use the result with length beam 1 from Ghazvininejad et al. (2019). Although both approaches use similar "denoising" objectives to train the refinement process, our model explicitly learns "insertion" and "deletion" in a dual-policy learning fashion, and outperforms both models.
4.2 Sequence Refinement
We evaluate LevT's capability of refining sequence outputs on the APE task. In this setting, the inputs are pairs of a source sequence and a black-box MT system's output, and the ground-truth outputs come from real human edits, expanded with synthetic data.
Dataset. We follow the usual protocol for synthetic APE experiments (Grangier and Auli, 2017): we first train the input MT system on half of the dataset, then train a refinement model on the other half, based on the outputs produced by the MT model trained in the first phase. For the real APE task, we use the data from the WMT17 Automatic Post-Editing Shared Task⁸ on En-De, which contains both real PE triples and a large-scale synthetic corpus.
Models & Evaluation. The baseline model is a standard Transformer encoding the concatenation of the source and the MT system's output. For the MT system, we deliberately choose imperfect systems that need refining: a statistical phrase-based MT system (PBMT, Koehn et al., 2003) and an RNN-based NMT system (Bahdanau et al., 2015). Apart from BLEU scores, we additionally report the translation error rate (TER, Snover et al., 2006), as it is widely used in the APE literature.
⁸http://www.statmt.org/wmt17/ape-task.html
Overall results. We show the main comparison in Table 3. When trained from scratch, LevT consistently improves the performance of the input MT system (either PBMT or NMT), and it achieves better performance than the autoregressive Transformer in most cases.
Pre-training on MT. Thanks to the generality of the LevT model, we show it is feasible to directly apply a LevT model trained for generation to refinement tasks, in this case from MT to APE; we call this a "zero-shot post-editing" setting. According to Table 3, the pre-trained MT models are always capable of improving the initial MT input on the synthetic tasks. The real APE task, however, differs quite a bit from the synthetic tasks, because human translators normally only fix a few spotted errors.
This results in very high BLEU scores even for the "do-nothing" column. Nevertheless, the pre-trained MT model achieves the best results after fine-tuning on the PE data, indicating that LevT is able to leverage its knowledge for both generation and refinement.
Collaboration with an Oracle. Thanks to the separation of the insertion and deletion operations, LevT has better interpretability and controllability. For example, we test LevT's ability to adapt to oracle (e.g. human translator) instructions. As shown in Figure 5, both the MT and PE tasks improve hugely when the oracle deletion is given at every step, and even more so when the oracle also provides the correct number of placeholders to insert. This sheds some light on computer-assisted text editing for human translators.
5 Related Work
Non-Autoregressive and Non-Monotonic Decoding. Breaking the autoregressive constraint and the monotonic (left-to-right) decoding order of classic neural sequence generation systems has recently attracted much interest. Stern et al. (2018) and Wang et al. (2018) designed partially parallel decoding schemes that output multiple tokens at each step. Gu et al. (2018) proposed a non-autoregressive framework using discrete latent variables, later adopted in Lee et al. (2018) as an iterative refinement process. Ghazvininejad et al. (2019) introduced the masked language modeling objective from BERT (Devlin et al., 2018) to non-autoregressively predict and refine translations. Welleck et al. (2019), Stern et al. (2019), and Gu et al. (2019) generate translations non-monotonically by adding words to the left or right of previous ones, or by inserting words in arbitrary order to form a sequence.
Editing-Based Models. Several prior works have explored incorporating "editing" operations into sequence generation tasks. For instance, Novak et al. (2016) predict and apply token substitutions iteratively to phrase-based MT system outputs using a convolutional neural network. QuickEdit (Grangier and Auli, 2017) and the deliberation network (Xia et al., 2017) both consist of two autoregressive decoders, where the second decoder refines the translation generated by the first. Guu et al. (2018) propose a neural editor that learns language modeling by first retrieving a prototype and then editing over it. Freitag et al. (2019) correct patterned errors in MT system outputs using transformer models trained on monolingual data. Additionally, the use of the Levenshtein distance with dynamic programming as the oracle policy was also proposed in Sabour et al. (2018) and Dong et al. (2019). Different from these works, the proposed model learns a non-autoregressive model that simultaneously inserts and deletes multiple tokens iteratively.
6 Conclusion
We propose the Levenshtein Transformer, a neural sequence generation model based on insertion and deletion. The resulting model achieves strong performance and decoding efficiency, and embraces both sequence generation and refinement in one model. The insertion and deletion operations are arguably more similar to how humans write or edit text. For future work, there is potential to extend this model to human-in-the-loop generation.
Acknowledgement
We would like to thank Kyunghyun Cho, Marc’Aurelio Ranzato, Douwe Kiela, Qi Liu, and our colleagues at Facebook AI Research for valuable feedback, discussions, and technical assistance.
1. What are the strengths and weaknesses of the proposed method in the paper? 2. How does the reviewer assess the novelty and contribution of the paper regarding its theoretical analysis and experimental results? 3. Are there any questions or concerns regarding the comparisons made in the paper with other works, particularly in terms of speed and iteration efficiency? 4. How could the paper improve its exposition and writing quality? 5. Does the reviewer have any suggestions for improving the paper's presentation of its ideas and results?
Review
Review === Detailed Comments ===
> "two atomic operations — insertion and deletion"
This is somewhat debatable. Under LevT, an insertion operation requires the number of slots to be predicted first, and then the actual insertions are predicted. This is not completely atomic, i.e., using the authors' terminology from Figure 1, "Insert Placeholders" then "Fill-in Tokens".
> Section 1. "(up to ×5 speed-up"
> Figure 4.
> Section 4.1 "Analysis of Efficiency"
This reviewer thinks the paper is quite misleading in the speed comparison and the iteration comparison. Figure 4 adds/subtracts U[0, 0.5) noise to the figure, which means it can subtract iterations; this gives a misleading plot. Figure 4 and the other iteration analyses are also misleading because the authors fail to take into account that 1 LevT iteration is 3 times more expensive than a standard transformer iteration (i.e., compared to other published methods).
> Section 3.1 Roll-in Policy
It took me several parses to fully understand the roll-in policy. This section could be rewritten to be clearer and easier to understand.
> Section 3.2 Expert Policy and Section 4 "Oracle vs. Teacher Model"
The terminology is confusing; please use the standard terminology in the field. This is simply Distillation vs. no-Distillation; the "Oracle" and "Teacher Model" terminology is confusing. Additionally, the use of the Levenshtein edit distance (and more specifically, decomposing it with dynamic programming and using it as the oracle policy) is not new. Citations [1, 2] are missing.
> Section 3.3
Comment: it seems like your model might benefit from a noisy-decoding approach, i.e., greedy decode with some noise, and select the best output based on the log-probability of the entire sequence.
> Section 4 Machine Translation
The authors presented several MT results (Ro-En, En-De, En-Ja); this reviewer will focus on the WMT14 En-De results. This is because WMT14 En-De is a well-established benchmark for machine translation, while the other datasets are much less well established and lack strong prior work, i.e., they are more relevant to the NLP community and less so to the Machine Learning community. First, the reviewer thanks the authors for publishing WMT14 En-De results instead of taking the easy way out and only publishing on less competitive MT datasets. However, the empirical results are misleading: in Table 1, the authors fail to compare to other prior published work while making bold claims about their own. The Transformer baseline is very poor (26.X BLEU); it is behind the original Transformer paper [3] at 27.3 BLEU, which in turn is behind modern Transformer implementations that can reach >28 BLEU quite easily.
> Section 4 Gigaword
Similar to the MT results, citations and comparisons to other published work are missing. For example, citing a few of the SOTA prior works from [4] would be nice.
Overall, this reviewer argues for acceptance of this paper. The ideas in the paper are sufficiently novel and a good contribution to the community. The empirical results are lacking, but that should not be grounds for rejection. However, this reviewer also finds the paper quite misleading in several places, especially in its comparisons with prior work, and a few citations are missing. The writing and exposition of the paper can be improved: there are many minor grammatical errors in the text, which feels very rushed and needs significant polishing. The exposition of the paper is currently below the NeurIPS bar.
However, assuming these issues are addressed in the rebuttal, this reviewer believes this paper should be accepted and will be willing to bump up the score from 6->7.
[1] Optimal Completion Distillation for Sequence Learning
[2] EditNTS: An Neural Programmer-Interpreter Model for Sentence Simplification through Explicit Editing
[3] Attention Is All You Need
[4] http://nlpprogress.com/english/summarization.html
NIPS
Title
Efficient and Effective Optimal Transport-Based Biclustering
Abstract
Bipartite graphs can be used to model a wide variety of dyadic information such as user-rating, document-term, and gene-disorder pairs. Biclustering is an extension of clustering to the underlying bipartite graph induced from this kind of data. In this paper, we leverage optimal transport (OT), which has gained momentum in the machine learning community, to propose a novel and scalable biclustering model that generalizes several classical biclustering approaches. We perform extensive experimentation to show the validity of our approach compared to other OT biclustering algorithms along both dimensions of the dyadic datasets.
1 Introduction
Let $G = (U, V, E)$ be a bipartite graph, i.e. a graph whose vertices can be divided into two disjoint sets $U = \{1, 2, \ldots, |U|\}$ with $|U| = n$ and $V = \{1, 2, \ldots, |V|\}$ with $|V| = d$, together with a set of edges $E$ where each edge connects a vertex of $U$ to a vertex of $V$. The adjacency matrix for this type of graph has the following structure:
$$A = \begin{pmatrix} 0_{n \times n} & B \\ B^\top & 0_{d \times d} \end{pmatrix} \qquad (1)$$
where $B$, of size $n \times d$, is called the biadjacency matrix of $G$; its rows and columns correspond to the two sets of vertices, and each entry represents an edge between a row and a column. Biclustering (or co-clustering) is the extension of clustering to this type of graph. Following [21], several biclustering models have attempted to solve the problem by viewing $B$ as a two-mode matrix and searching for a simultaneous partition of its rows and columns [9]. In this way, biclustering seeks to reveal subsets of $U$ which exhibit a similar behaviour across a subset of $V$ in matrix $B$. Biclustering has been used in a number of different contexts. [12] used microarray data to find relations between genes and conditions, finding that genes with similar functions often cluster together. [20] applied this paradigm to data from the US Food and Drug Administration reporting system in order to identify groups of drugs with adverse effects. [11] used it to find market segments among tourists so as to enable more effective targeted marketing. There have been various other applications [9, 33, 19]. Several solutions to the biclustering problem have been proposed in the literature (see [17]). [10] used an information-theoretic approach to solve the problem by minimizing the difference in mutual information between $B$ and a summary matrix; they implicitly assume that the data points are generated from a Poisson latent block model [18]. [3] adapted classical modularity to bipartite networks and then used it to identify modules within them. [35] proposed a biclustering paradigm based on nonnegative matrix tri-factorization of the biadjacency matrix. Recently, Optimal Transport (OT) has taken the machine learning community by storm. OT has helped to solve a variety of data mining problems, and biclustering is no exception. [25] proposed two models for biclustering: a first model, CCOT, which does co-clustering based on the scaling vectors obtained by applying the Sinkhorn-Knopp algorithm to a square subsampled version of matrix $B$, and a second model, CCOT-GW, which uses scaling vectors obtained by computing entropic Gromov-Wasserstein barycenters and which does not require subsampling.
Then came [34], where the authors did biclustering by minimizing a new metric, COOT, which generalizes the Gromov-Wasserstein distance between $B$ and a summary matrix, similarly to what was done in [10]. More specifically, they proposed two new metrics: COOT, together with an entropically regularized metric COOT$_\lambda$. However, both [25] and [34] have certain drawbacks. First, neither algorithm tackles biclustering from the beginning: the co-clusters are only deduced at convergence, so biclustering is a consequence rather than the main goal. Secondly, they suffer from high computational complexity; CCOT and CCOT-GW also consume large amounts of memory. Finally, we will see that these algorithms are not suited to sparse dyadic data. In this paper, while integrating the biclustering objective from the beginning, we propose a generic framework for biclustering through optimal transport, which generalizes some previous biclustering approaches. We propose two efficient methods for solving this problem: one that gives an almost-hard biclustering, and another that gives a fuzzy or soft biclustering through entropic regularization. These methods outperform other optimal transport biclustering models, in terms of both document and term clustering, on several regular and large-scale datasets, while being more computationally and memory efficient. We emphasize once again that the approach we propose is specifically tailored to datasets consisting of dyadic data.
2 Methodology
Notations. In what follows, $\Delta_n = \{\mathbf{p} \in \mathbb{R}^n_+ \mid \sum_{i=1}^n p_i = 1\}$ denotes the $n$-dimensional standard simplex. $\Pi(\mathbf{w}, \mathbf{v}) = \{Z \in \mathbb{R}^{n \times k}_+ \mid Z\mathbf{1} = \mathbf{w},\ Z^\top\mathbf{1} = \mathbf{v}\}$ denotes the transportation polytope, where $\mathbf{w} \in \Delta_n$ and $\mathbf{v} \in \Delta_k$ are the marginals of the joint distribution $Z$, and $\mathbf{1}_n$ is a vector of ones. Matrices are denoted with uppercase boldface letters and vectors with lowercase boldface letters. For a matrix $M$, its $i$-th row is $\mathbf{m}_i$ and its $j$-th column is $\mathbf{m}'_j$. $\|\cdot\|_0$ is the 0-norm, which returns the number of nonzero elements of its argument.
2.1 Preliminaries
We first need to introduce exact discrete OT and its entropically regularized counterpart, and show how biclustering can be posed as an integer program.
Discrete OT as a linear program. The goal of discrete optimal transport is to find a minimal-cost transport plan between a source probability distribution $\mathbf{w}$ and a target distribution $\mathbf{v}$. Here we are interested in the discrete case of the Kantorovich formulation of OT, that is,
$$\mathrm{OT}(M, \mathbf{w}, \mathbf{v}) \triangleq \min_{Z \in \Pi(\mathbf{w}, \mathbf{v})} \langle M, Z \rangle \qquad (2)$$
where $M \in \mathbb{R}^{n \times k}$ is the cost matrix and $m_{ij}$ quantifies the effort needed to transport a probability mass from $w_i$ to $v_j$.
Discrete entropy-regularized OT. It has been suggested in the literature [6, 5] that the use of a regularization such as entropic regularization can lead to better computational and statistical efficiency:
$$\mathrm{OT}_\lambda(M, \mathbf{w}, \mathbf{v}) \triangleq \min_{Z \in \Pi(\mathbf{w}, \mathbf{v})} \langle M, Z \rangle - \lambda H(Z) \qquad (3)$$
where $H$ is the entropy, defined as $H(Z) \triangleq -\sum_{i,j} z_{ij} \log z_{ij}$, and $\lambda$ controls the strength of the regularization. The computational efficiency comes from the fact that the unique solution of this problem has the structure $Z := \mathrm{diag}(\mathbf{a}) \exp(-M/\lambda)\, \mathrm{diag}(\mathbf{b})$, a rescaled elementwise negative exponential of the cost $M$, where $\mathbf{a}$ and $\mathbf{b}$ are scaling vectors. These vectors can be found efficiently using the Sinkhorn-Knopp algorithm.
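As a minimal illustration of how the scaling vectors are obtained, here is a NumPy sketch of the Sinkhorn-Knopp iterations (a fixed iteration count is used instead of a convergence test):

    import numpy as np

    def sinkhorn(M, w, v, lam, n_iters=200):
        # Recover a, b such that Z = diag(a) exp(-M/lam) diag(b) has marginals w, v.
        K = np.exp(-M / lam)                 # elementwise Gibbs kernel
        a = np.ones_like(w)
        for _ in range(n_iters):
            b = v / (K.T @ a)                # match the column marginals
            a = w / (K @ b)                  # match the row marginals
        return a[:, None] * K * b[None, :]   # the transport plan Z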
Biclustering as an integer program. The block seriation problem [27] consists in finding two permutation matrices, one for the rows and one for the columns, such that dense blocks appear along the diagonal of the permuted matrix. A possible definition of the block seriation problem is as follows: given a matrix $B \in \mathbb{R}^{n \times d}$ such that $b_{ij}$ gives the strength of the association between row $i$ and column $j$ (as in the case of a biadjacency matrix, for example), we have
$$\max_C \sum_{i,j} b_{ij} c_{ij} \qquad (4)$$
subject to:
$c_{ij} \in \{0, 1\}$ for all $i, j$;
$\sum_i c_{ij} \geq 1$ for all $j$;
$\sum_j c_{ij} \geq 1$ for all $i$;
and, for all $i, j, i', j'$,
$c_{ij} + c_{ij'} + c_{i'j'} - c_{i'j} \leq 2$,
$c_{i'j'} + c_{i'j} + c_{ij} - c_{ij'} \leq 2$,
$c_{i'j} + c_{ij} + c_{ij'} - c_{i'j'} \leq 2$,
$c_{ij'} + c_{i'j'} + c_{i'j} - c_{ij} \leq 2$.
A solution $C$ is a block diagonal matrix up to a permutation of its rows and columns. The block seriation problem is an integer programming problem that is NP-hard. One approach to solving it uses a simplified version where a rank constraint $\mathrm{rank}(C) \leq k$ is added, for $k$ the number of desired biclusters. Integrating this constraint into (4), we can define a new problem via the low-rank factorization $C = ZW^\top$, which we formulate as
$$\max_{Z \in \Gamma(n,k),\ W \in \Gamma(d,k)} \sum_{i,j,h} b_{ij}\, z_{ih} w_{jh} \qquad (5)$$
where $\Gamma(n, k) = \{Z \in \{0,1\}^{n \times k} \mid Z\mathbf{1} = \mathbf{1}\}$ is the set of hard partitions of dimension $n \times k$. A simple heuristic for solving this problem involves alternatingly solving for $Z$ given $W$, and vice versa, using classical clustering algorithms, before identifying biclusters through the rearranged matrix $C$, which displays a block diagonal structure, as shown in Figure 1a. The biclusters are identified by grouping together the rows and columns that form a block along the diagonal.
2.2 Biclustering using Optimal Transport
Here we propose a new biclustering problem based on block seriation and optimal transport. For this purpose we first define what we term an anti-adjacency matrix; note that a similar concept has been discussed in [36].
Definition 1 (Anti-adjacency matrix). Given a graph characterized by an adjacency matrix $A$, we have a corresponding anti-adjacency matrix $\bar{A}$ such that $\bar{a}_{ij}$ quantifies the discrepancy between nodes $i$ and $j$.
We consider a bipartite graph characterized by its biadjacency matrix $B = (b_{ij}) \in \mathbb{R}^{n \times d}$. The rows of $B$ are endowed with weights $\mathbf{w} \in \Delta_n$ and its columns with weights $\mathbf{v} \in \Delta_d$. We also consider a row exemplar distribution $\mathbf{r} \in \Delta_k$ and a column exemplar distribution $\mathbf{c} \in \Delta_k$. Depending on the availability of a priori information about the data, these weight vectors can be set to uniform distributions. Now let the anti-biadjacency matrix be $\bar{B} = L(B)$, where $L : \mathbb{R}^{n \times d} \to \mathbb{R}^{n \times d}$ transforms $b_{ij}$, the association between node $i$ and node $j$, into a discrepancy measure $L(B)_{ij}$. We thus define the optimal transport block seriation problem as the following bilinear program:
$$\mathrm{BCOT}(\mathbf{w}, \mathbf{v}, \mathbf{r}, \mathbf{c}) \triangleq \min_{Z \in \Pi(\mathbf{w}, \mathbf{r}),\ W \in \Pi(\mathbf{v}, \mathbf{c})} \sum_{i,j,h} L(B)_{ij}\, z_{ih} w_{jh} \;\equiv\; \min_{Z \in \Pi(\mathbf{w}, \mathbf{r}),\ W \in \Pi(\mathbf{v}, \mathbf{c})} \big\langle L(B),\, ZW^\top \big\rangle \qquad (6)$$
where $Z$ is a transport plan (or coupling) between the row distribution $\mathbf{w}$ and the row exemplar distribution $\mathbf{r}$, and similarly for $W$ with respect to the column distribution $\mathbf{v}$ and the column exemplar distribution $\mathbf{c}$.
Inducing a biclustering via BCOT. We now show how to obtain a partition of the rows and the columns given a solution pair $(Z, W)$. Our aim in what follows is to identify an almost-hard clustering couple for rows and columns from the couplings $Z$ and $W$.
Definition 2 ($h$-almost hard clustering). We define an $h$-almost hard clustering as a clustering whose assignment matrix $C \in \mathbb{R}^{n \times k}$ satisfies $\|C\|_0 = n + h$, with $\|\mathbf{c}\|_0 > 0$ for each row $\mathbf{c}$ of $C$. When $h = 0$, we obtain a standard hard clustering with one non-zero element per row.
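For illustration, the support count in Definition 2, and the argmax rounding used below to obtain strict partitions, can be written as two small NumPy helpers (the names are ours):

    import numpy as np

    def almost_hardness(Z, tol=1e-12):
        # Return h such that Z is an h-almost hard clustering: ||Z||_0 = n + h.
        return int((np.abs(Z) > tol).sum() - Z.shape[0])

    def to_hard_partition(Z):
        # Round a coupling to a hard clustering by assigning each row
        # to the cluster holding its largest coupling value.
        return Z.argmax(axis=1)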
Proposition 1.¹ For $\mathbf{w}$, $\mathbf{v}$, $\mathbf{r}$ and $\mathbf{c}$ containing no zeros, there exists an optimal pair of coupling matrices $Z$ and $W$ that are $h$-almost hard clusterings with $h \in \{0, \ldots, k-1\}$. Furthermore, when $n = k$ (resp. $d = k$) and $\mathbf{w} = \mathbf{r}$ (resp. $\mathbf{v} = \mathbf{c}$), this $Z$ (resp. $W$) becomes a hard clustering, i.e. $Z \in \Gamma(n, n)$ (resp. $W \in \Gamma(d, d)$).
This means that the solutions are already almost a hard partition of the data, since $k \ll n, d$. To obtain a final hard clustering in the strict sense, we assign each row (resp. column) to the cluster corresponding to the largest value in its row of $Z$ (resp. $W$); this should not significantly change the structure of the solution. Figure 1b provides an illustration: we see the block diagonal structure generated by the product of the two coupling matrices $C = ZW^\top$, similar in appearance to the biclustering produced by hard block seriation in Figure 1a, apart from a few nonzero entries off the block diagonal that are hard to see immediately.
Intuition for BCOT. To explain the intuition behind the proposed approach, we need to look at how the problem is solved. The optimization procedure described in Algorithm 1 consists in alternating between the computation of an optimal transport plan $Z$ given $W$ and vice versa. When solving for $Z$ given $W$, the problem can be rewritten as
$$\mathrm{BCOT}(\mathbf{w}, \mathbf{v}, \mathbf{r}, \mathbf{c}) \equiv \min_{Z \in \Pi(\mathbf{w}, \mathbf{r})} \langle L(B)W,\, Z \rangle. \qquad (7)$$
This is an optimal transport problem with $L(B)W$ as the cost matrix. The resulting transport plan $Z$ can be seen as a kind of row cluster assignment matrix: if $z_{ih} > 0$, then row $i$ is assigned to cluster $h$. The same holds for $W$, which can be seen as a column cluster assignment matrix. This also means that, since $L(B)$ is the dissimilarity between the rows and the columns, the cost matrix $L(B)W$ represents the dissimilarity between rows and row exemplars (or representatives, or centroids). In particular, $L(B)_i \mathbf{w}'_h$ is the dissimilarity, i.e. the cost of transporting probability mass, between row $i$ and row cluster exemplar $h$. The reasoning is the same for the columns and the optimal coupling $W$.
Low-rank optimal transport. Biclustering is the main purpose of the proposed approach, but there is another interesting use case.
Proposition 2. For equal target row and column representative distributions, i.e. $\mathbf{r} = \mathbf{c}$, containing no zero entries, and given a solution pair $Z$ and $W$ of BCOT, the matrix $Q = Z\,\mathrm{diag}(1/\mathbf{r})\,W^\top$ is an approximation of the optimal transport plan solving problem (2), with rank at most $\min(\mathrm{rank}(Z), \mathrm{rank}(W))$.
¹Proofs for the propositions are given in the appendix.
Some recent works [16, 31] have suggested that this kind of low-rank regularization is preferable to entropic regularization in certain respects. For example, the rank parameter is easier to select, since it has simple bounds (an integer between 1 and $n$); this may be contrasted with the regularization strength $\lambda$ in the Sinkhorn algorithm, which is continuous.
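A small sketch of Proposition 2, assuming couplings with matching exemplar marginals: if $Z \in \Pi(\mathbf{w}, \mathbf{r})$ and $W \in \Pi(\mathbf{v}, \mathbf{r})$, the plan below has row sums $\mathbf{w}$ and column sums $\mathbf{v}$, i.e. it is a feasible low-rank transport plan between $\mathbf{w}$ and $\mathbf{v}$:

    import numpy as np

    def low_rank_plan(Z, W, r):
        # Q = Z diag(1/r) W^T, with rank at most min(rank(Z), rank(W)).
        return Z @ np.diag(1.0 / r) @ W.T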
2.3 Fuzzy Biclustering via Regularized Optimal Transport
As previously mentioned, entropic regularization is interesting because of its various useful features, including statistical and computational efficiency. Another consequence of entropic regularization, however, is that the optimal couplings $Z$ and $W$ are dense matrices, owing to the structure of the optimal solution of entropically regularized OT problems. We formulate the problem as follows:
$$\mathrm{BCOT}_\lambda(\mathbf{w}, \mathbf{v}, \mathbf{r}, \mathbf{c}) \triangleq \min_{Z \in \Pi(\mathbf{w}, \mathbf{r}),\ W \in \Pi(\mathbf{v}, \mathbf{c})} \big\langle L(B),\, ZW^\top \big\rangle - \lambda_Z H(Z) - \lambda_W H(W) \qquad (8)$$
where $\lambda_Z$ and $\lambda_W$ are the regularization parameters.
Fuzzy block seriation. We propose a fuzzy variant of the block seriation problem which, by extension, allows us to define a fuzzy variant of BCOT using entropic regularization. Let the fuzzy block seriation problem be defined as
$$\max_{Z \in \Gamma_s(n,k),\ W \in \Gamma_s(d,k)} \sum_{i,j,h} b_{ij}\, z_{ih} w_{jh} + \Omega(Z, W) \qquad (9)$$
where $\Omega(Z, W)$ is a regularization term introduced to make the partition matrices $Z$ and $W$ dense (for example, entropic regularization or low-rank constraints), and $\Gamma_s(n, k) = \{Z \in \mathbb{R}^{n \times k}_+ \mid Z\mathbf{1} = \mathbf{1}\}$ is the set of fuzzy partitions. Intuitively, for a solution pair $(Z, W)$, up to a constant factor, each entry of the block seriation matrix $C = ZW^\top$ can be seen as the probability that its corresponding row and column belong to the same bicluster, i.e. $c_{ij} = \mathbf{z}_i \mathbf{w}_j^\top = \sum_{h=1}^{k} z_{ih} w_{jh} = p(\mathbf{b}_i, \mathbf{b}'_j) = \sum_{h=1}^{k} p(\mathbf{b}_i, \mathbf{b}'_j \in h)$. It is easy to see how problem (9) relates to problem (8), and that the couplings solving the problem give the probability that the different rows and columns belong to the same biclusters. Figure 1c shows biclusters produced by the solutions of BCOT$_\lambda$. Similarly to BCOT, a block diagonal structure is formed; however, there are also several off-block-diagonal nonzero entries that represent the probabilities of the row-column pairs belonging to the same biclusters.
3 Links to Existing Work
3.1 Modularity Maximization in Bipartite Graphs [3]. This model co-clusters binary and contingency matrices by directly maximizing an adapted version of the modularity measure traditionally used for networks. The criterion it optimizes is
$$\max_{Z \in \Gamma(n,k),\ W \in \Gamma(d,k)} \sum_{i,j,h} z_{ih} w_{jh} \left( b_{ij} - \frac{b_{i.}\, b_{.j}}{b_{..}} \right). \qquad (10)$$
By setting $L(B) = -\big(B - \tfrac{1}{b_{..}} B\mathbf{1}\mathbf{1}^\top B\big)$, this problem becomes equivalent to ours; the difference is in the constraints on $Z$ and $W$.
3.2 Modularity-Based Sparse Soft Graph Clustering [23]. Here the authors proposed a fuzzy variant of the above problem (although in the context of traditional clustering rather than biclustering). Solving the problem gives, for each element of the dataset, a probability of that element belonging to a given cluster. Our proposed entropic regularization variant represents a kind of extension of this problem to bipartite graphs.
3.3 Directional Co-clustering with a Conscience [30, 1]. This model makes use of the block von Mises-Fisher mixture model for co-clustering directional data on the unit sphere. It optimizes the following criterion:
$$\max_{Z \in \Gamma(n,k),\ W \in \Gamma(d,k)} \sum_{i,j,h} \frac{1}{\sqrt{z_{.h}\, w_{.h}}}\, z_{ih} w_{jh}\, b_{ij}. \qquad (11)$$
In our formulation, if we define $L(B) = -B$ and apply cluster size normalization to the optimal transport plans, $\tilde{Z} = Z\,\mathrm{diag}(Z^\top\mathbf{1})^{-1/2}$ and $\tilde{W} = W\,\mathrm{diag}(W^\top\mathbf{1})^{-1/2}$, after computing $Z$ and $W$ respectively in Algorithm 1, we obtain a more general version of the algorithm proposed by the authors for solving problem (11).
3.4 Bipartite Correlation Clustering [2]. In the case where the cost function results in a complete bipartite graph with '+' and '-' edges, with
$$L(B)_{ij} = \begin{cases} -1 & \text{if } b_{ij} > 0 \\ +1 & \text{otherwise} \end{cases} \qquad (12)$$
we get what is known as Bipartite Correlation Clustering. The solution to this problem maximizes the number of agreements, i.e. the number of all '+' edges within clusters plus all '-' edges distributed across clusters.
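To make the special cases above concrete, here are illustrative NumPy constructions of two of these costs: the bipartite modularity cost implied by Eq. (10) and the correlation-clustering cost of Eq. (12):

    import numpy as np

    def modularity_cost(B):
        # L(B) = -(B - (1/b..) B 1 1^T B); the subtracted term has entries b_i. b_.j / b..
        expected = np.outer(B.sum(axis=1), B.sum(axis=0)) / B.sum()
        return -(B - expected)

    def correlation_cost(B):
        # L(B)_ij = -1 if b_ij > 0, +1 otherwise (Eq. 12).
        return np.where(B > 0, -1.0, 1.0)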
4 Optimization and Complexity
Optimization. Since the block seriation problem is NP-hard, computing an exact solution is prohibitive. An efficient and widely used heuristic for solving this kind of problem is block coordinate descent, where row assignments are computed for fixed column assignments, and then vice versa, in alternation. We express the proposed algorithm in pseudo-code as Algorithm 1. At each iteration we solve two intermediate optimal transport problems with cost matrices of dimensions $n \times k$ and $d \times k$; since $B$ is generally sparse, and $L$ can be defined such that $L(B)$ retains a similarly sparse structure, the computation of the intermediate cost matrices $L(B)W$ and $L(B)^\top Z$ is reasonably efficient. We also observed that the algorithm does not need many iterations to converge, as shown in Figure 2, be it for BCOT or BCOT$_\lambda$.
Algorithm 1: BCOT
Input: bi-adjacency matrix $B$; row and column weights $\mathbf{w}$, $\mathbf{v}$; row and column exemplar distributions $\mathbf{r}$, $\mathbf{c}$
Output: row and column partitions $\pi_r$, $\pi_c$
  $W \leftarrow W_{init}$
  while not converged do
    $Z \leftarrow \arg\mathrm{OT}(L(B)W, \mathbf{w}, \mathbf{r})$
    $W \leftarrow \arg\mathrm{OT}(L(B)^\top Z, \mathbf{v}, \mathbf{c})$
  end
  Generate $\pi_r$, $\pi_c$ from $Z$ and $W$
Proposition 3. The computational complexity of the BCOT algorithm (Algorithm 1) when using an exact OT solver is $O(tk\|B\|_0 + tnk(n+k)\log(n+k) + tdk(d+k)\log(d+k))$, and when using entropic regularization the complexity is $O(tk\|B\|_0 + tkn + tkd)$, where $t$ is the number of iterations.
In Table 1 we report the computational and spatial complexities of the different biclustering approaches. Our model has the same spatial complexity as the COOT variants and a better complexity than the CCOT variants. As regards computational complexity, our model should in most cases be faster with sparse data, and our experiments support this conjecture. For reproducibility, we publicly release our code.²
²https://github.com/chakib401/BCOT
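A compact sketch of Algorithm 1 built on the POT solvers used in the experiments; the initialization, the fixed iteration count in place of a convergence test, and the choice L(B) = -B (taken from the experimental setup) are simplifications of ours:

    import numpy as np
    import ot  # Python Optimal Transport (POT)

    def bcot(B, w, v, r, c, n_iters=20, reg=None):
        L = -np.asarray(B, dtype=float)          # anti-biadjacency cost L(B) = -B
        rng = np.random.default_rng(0)
        W = rng.random((B.shape[1], len(c)))
        W /= W.sum()                             # rough initial coupling (illustrative)
        for _ in range(n_iters):
            if reg is None:                      # exact BCOT
                Z = ot.emd(w, r, L @ W)
                W = ot.emd(v, c, L.T @ Z)
            else:                                # entropic BCOT_lambda
                Z = ot.sinkhorn(w, r, L @ W, reg)
                W = ot.sinkhorn(v, c, L.T @ Z, reg)
        return Z.argmax(axis=1), W.argmax(axis=1)  # hard row / column partitions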
5 Experiments
We ran experiments using term-document matrices. The benefit of using biclustering on this kind of data is that the resulting biclusters contain both documents and the words that characterize them, which helps in interpreting the document clusters. Additional experiments on synthetic and gene expression data are available in the appendix.
5.1 Datasets
We evaluate BCOT on six benchmark document-term datasets: ACM, DBLP, PubMed, Wiki, Ohscal, and 20 Newsgroups; their characteristics are shown in Table 2. ACM, DBLP, PubMed and Wiki are attributed networks, from which we use only the node-level features that correspond to term-document matrices. We also selected the Ohscal collection and 20 Newsgroups as large-scale document-term matrices to serve as computational efficiency benchmarks.
5.2 Experimental Setup
In our experiments we define the loss function as $L(B) = -cB$, where $c$ is selected from $\{1, k, d, n\}$. For BCOT$_\lambda$, the regularization parameter $\lambda$ is selected from $\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10\}$. The best hyper-parameters are those that minimize the number of empty clusters; ties are broken according to the Davies-Bouldin index of the partition [7]. Random restarts are not used for any of the algorithms, including k-means. We use the implementations provided by the authors for COOT, COOT$_\lambda$ and CCOT-GW. The code for CCOT was not available, so we implemented it based on the code for CCOT-GW. All reported figures are averages over 10 runs. All experiments were performed on the same machine with an Intel(R) Xeon(R) CPU and 12GB of RAM. For the OT solvers we made use of the POT package [15].
5.3 Document Clustering
Metrics. Here the evaluation is straightforward: we adopt three popular clustering metrics, clustering accuracy (CA), normalized mutual information (NMI) [4], and adjusted Rand index (ARI) [24].
Performance. Document clustering results on ACM, DBLP, PubMed and Wiki are given in Table 3 for the three metrics. In all cases the best result is obtained either by BCOT or by BCOT$_\lambda$. Moreover, on Wiki, BCOT$_\lambda$ gives competitive results compared with the state-of-the-art attributed graph clustering methods presented in [14], despite not having access to the graph structure information in the Wiki citation network.
Efficiency. Figure 3 plots the document clustering performance (accuracy against training time) of the different methods on the two large-scale document-term matrices, 20 Newsgroups and Ohscal. BCOT offers the best accuracy, while BCOT$_\lambda$ is the fastest method on both datasets. For both BCOT and COOT, the entropically regularized versions outspeed their exact counterparts, and CCOT suffers from very high computation times, due mainly to the fact that it requires pairwise distance matrices to be computed on the rows and columns.
5.4 Term Clustering
Metrics. Unlike document clustering, there is no ground-truth partition for terms, so we need another way of evaluating term clustering results. One generally accepted technique is to analyse the semantic coherence of the clusters obtained. To this end we introduce a metric based on pointwise mutual information (PMI), a frequently used information-theoretic metric for quantifying the relationship between pairs of discrete random variable outcomes. The PMI measure was chosen because prior research [29] has shown that it is closely associated with human judgements of word relatedness. The PMI between the terms $w_i$ and $w_j$ is calculated as
$$\mathrm{PMI}(w_i, w_j) = \log \frac{p(w_i, w_j)}{p(w_i)\, p(w_j)} \qquad (13)$$
In the context of term clustering, given the word co-occurrence matrix $K = B^\top B$, the PMI is estimated as
$$\mathrm{PMI}(w_i, w_j) = \log \frac{k_{..}\, k_{ij}}{k_{i.}\, k_{.j}} \qquad (14)$$
To evaluate a partition of terms $P$, we propose a metric based on intra- and inter-cluster PMI, as follows:
$$\mathrm{PMI}_{intra}(P) = \sum_{i \in P} \sum_{j \in P} k_{ij} \qquad (15)$$
$$\mathrm{PMI}_{inter}(P) = \sum_{i \in P} \sum_{j \notin P} k_{ij} \qquad (16)$$
In this way, a good clustering should reveal high intra-cluster semantic relatedness, corresponding to higher PMI values. Using the intra and inter PMIs, we propose the following coherence index:
$$\mathrm{coherence}(\mathcal{P}) = \frac{1}{\sum_{P \in \mathcal{P}} |P|} \sum_{P \in \mathcal{P}} |P| \left( \mathrm{PMI}_{intra}(P) - \mathrm{PMI}_{inter}(P) \right). \qquad (17)$$
Our reasoning is this: the greater the semantic proximity between terms in the same clusters, and the greater the semantic distance between terms in different clusters, the higher the value of coherence.
Results. Since there is no ground-truth number of term clusters, we use the cluster number estimates produced by CCOT-GW for all the other models, so that coherence values are easy to compare between them; comparisons based on different numbers of clusters would favor the model using the larger number of clusters. Table 4 shows the coherences obtained across the different datasets using our approach, along with those of the baselines. It is clear that BCOT succeeds in capturing more semantics than the other approaches since, whatever the dataset, one or other of the two BCOT variants gives the highest coherence.
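The coherence index of Eq. (17) is straightforward to compute from a term partition; here is a sketch following the intra/inter definitions above over $K = B^\top B$:

    import numpy as np

    def coherence(B, labels):
        # labels: integer cluster label per term (column of B).
        K = B.T @ B                                        # word co-occurrence matrix
        score, total = 0.0, 0
        for h in np.unique(labels):
            P = np.where(labels == h)[0]
            Q = np.where(labels != h)[0]
            intra = K[np.ix_(P, P)].sum()                  # PMI_intra(P), Eq. (15)
            inter = K[np.ix_(P, Q)].sum()                  # PMI_inter(P), Eq. (16)
            score += len(P) * (intra - inter)
            total += len(P)
        return score / total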
5.5 Statistical Significance
We performed a Nemenyi post-hoc test [28, 8] with a confidence level of 90% on the document and term clustering results, to determine whether our model outperforms the other OT biclustering approaches in a statistically significant way. To conduct this test, we generated 20 performance rankings of the OT biclustering models based on their performance for each dataset and quality metric pair, for both document and term clustering. Figure 4 shows the results of the test. Two differently performing groups were identified: one comprising BCOT and BCOT$_\lambda$, which gives better results than the other group comprising the remaining COOT and CCOT variants. This also indicates that, with this specific number of datasets and metrics, the test was unable to tell COOT and CCOT apart in a statistically significant way.
6 Conclusion
Clustering and biclustering through optimal transport are still at a nascent stage, with many challenges remaining unsolved. This paper introduces a novel problem for biclustering using optimal transport that takes into account the sparse nature of certain types of dyadic data, such as document-term matrices, to enable more computationally efficient resolution. The problem is posed as a bilinear program that we solve using an efficient block coordinate descent algorithm to find a vertex solution. Experiments on a number of document-term datasets suggest that the proposed approach does a good job of finding clusters that correspond to ground-truth document classes, while generating semantically coherent partitions for the terms. In this setting, our model outperforms recent OT biclustering methods by a significant margin, while being more computationally efficient.
Acknowledgments and Disclosure of Funding
This work has been funded by Informatique Caisse des Dépôts et Consignations (ICDC), Association Nationale de la Recherche et de la Technologie (ANRT), and Idex-Spectrans of Université Paris Cité.
1. What is the focus and contribution of the paper regarding biclustering using optimal transport? 2. What are the strengths and weaknesses of the proposed methods, particularly in terms of their computational efficiency and estimation accuracy? 3. Do you have any questions regarding the introduction, definitions, and constraints used in the paper? 4. What are the limitations of the proposed approach, especially when applying it to other data types? 5. Are there any concerns or suggestions regarding the experimental setup and results presented in the paper?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a generic framework for biclustering using optimal transport. Two methods are developed in this framework, usually resulting in an almost-hard biclustering and a fuzzy biclustering, respectively. The computational efficiency and accuracy are validated on six benchmark datasets.
Strengths And Weaknesses
Originality
Strength: This paper leverages low-rank optimal transport to solve the biclustering problem for dyadic data.
Quality
The paper is generally well written grammatically. However, the technical discussion lacks clarity in some respects; for example, on page 1, line 27, the "summary matrix" is mentioned without further explanation.
Clarity
Weaknesses: (1) It is unclear which major advantage makes the proposed methods perform better than the others in the experiments. (2) Many details of the experiments seem to be missing, such as how the parameters r and c are selected. There are also many existing works introduced in Section 3 that seem able to solve the discussed problem, yet they are not compared against.
Significance
This paper leverages optimal transport to solve the biclustering problem for dyadic data. The computational efficiency and estimation accuracy are achieved through the low-rank assumption on the solution matrix.
Questions
The introduction of the relevant work is unclear and would need more detail and formalization. I fail to grasp exactly what causes the weaknesses of the introduced methods CCOT, CCOT-GW, COOT and COOT-GW.
When the authors say they integrate the constraint rank(C) ≤ k into (4) and finally get (5), are the three constraints (binarity, assignment, impossible triads) included or ignored?
The definition of the anti-adjacency matrix seems a little arbitrary, since it is defined based on the "discrepancy" between two nodes, which is a concept without any rigorous mathematical statement in the paper.
Page 3, line 95: it seems that the dimensions of r and c can be different. How can that be, considering that Z should be a matrix with n rows and r columns and W should be a matrix with d rows and c columns?
It would be better to change the notation for the assignment matrix, since it duplicates that of the solution matrix C; this causes some confusion while reading.
Page 5: is there a connection between (8) and (9), or are they two different objective functions?
Page 7: the time complexity of CCOT reported in the paper "Co-clustering through optimal transport" does not seem to match the order in Table 1. What causes the extra computational burden in this paper?
In the implementation, it seems the algorithm needs r and c as input. The details of how to choose these two parameters and decide the rank k seem to be missing.
In the experiments, the number of repetitions seems small (only 10 runs are conducted). In some settings the standard deviation of the results is 0; how does that come about?
Limitations
The authors mention the limitation that the method is specifically tailored to datasets consisting of dyadic data for biclustering and cannot be applied directly to other data types such as images.
NIPS
Title Efficient and Effective Optimal Transport-Based Biclustering Abstract Bipartite graphs can be used to model a wide variety of dyadic information such as user-rating, document-term, and gene-disorder pairs. Biclustering is an extension of clustering to the underlying bipartite graph induced from this kind of data. In this paper, we leverage optimal transport (OT) which has gained momentum in the machine learning community to propose a novel and scalable biclustering model that generalizes several classical biclustering approaches. We perform extensive experimentation to show the validity of our approach compared to other OT biclustering algorithms along both dimensions of the dyadic datasets. 1 Introduction Let G = (U, V,E) be a bipartite graph, which is a graph whose vertices can be divided into two disjoint sets U = {1, 2, . . . , |U |} with |U | = n, V = {1, 2, . . . , |V |} with |V | = d and the set of edges E where each edge connects a vertex of U to a vertex of V . The adjacency matrix for this type of graph has the following structure A = ( 0n×n B B⊤ 0d×d ) (1) where B of size n× d is called the biadjacency matrix of G, its rows and columns corresponding to the two sets of vertices; each entry represents an edge between a row and a column. Biclustering (or Co-clustering) is the extention of clustering to this type of graph. Following [21], several biclustering models have attempted to solve the problem by viewing B as a two-mode matrix and searching for a simultaneous partition of its rows and columns [9]. In this way, biclustering seeks to reveal subsets of U which exhibit a similar behaviour across a subset of V in matrix B. Biclustering has been used in a number of different contexts. [12] used microarray data to find relations between genes and conditions, finding that genes with similar functions often cluster together. [20] applied this paradigm to data from the US Food and Drug Administration reporting system in order to identify groups of drugs with adverse effects. [11] used it to find market segments among tourists so as to enable more effective targeted marketing. There have been various other applications [9, 33, 19]. Several solutions to the biclustering problem have been proposed in the literature (see [17]). [10] used an information-theoretic approach to solve the problem by minimizing the difference in mutual 36th Conference on Neural Information Processing Systems (NeurIPS 2022). information between B and a summary matrix; they implicitly assume that the data points are generated from a Poisson latent block model [18]. [3] adapted classical modularity to bipartite networks and then used it to identify modules within them. [35] proposed a biclustering paradigm based on nonnegative matrix tri-factorization of the biadjacency matrix. Recently, Optimal Transport (OT) has taken the machine learning community by storm. OT has helped to solve a variety of data mining problems, and biclustering is no exception. [25] proposed two models for biclustering: a first model, CCOT, which does co-clustering based on the scaling vectors obtained by applying the Sinkhorn-Knopp algorithm on a square subsampled version of matrix B, and a second model, CCOT-GW, which uses scaling vectors obtained by computing entropic Gromov-Wasserstein barycenters, and which does not require subsampling. 
Then came [34], where the authors did biclustering by minimizing a new metric, COOT, which generalizes the Gromov-Wasserstein distance between B and a summary matrix, similarly to what was done in [10]. More specifically, they proposed two new metrics: COOT, together with an entropically regularized metric COOTλ. However, both [25] and [34] have certain drawbacks. First, both algorithms do not tackle the biclustering from the beginning; the co-clusters are deduced at the convergence. Thereby biclustering is a consequence and not a main goal. Secondly, they suffer from high computational complexity; CCOT and CCOT-GW also consume large amounts of memory. Finally, we will see that these algorithms are not suited to dyadic sparse data. In this paper, while integrating the biclustering objective from the beginning, we propose a generic framework for biclustering through optimal transport, which generalizes some previous biclustering approaches. We propose two efficient methods for solving this problem: one that gives an almost hard biclustering, and another that gives a fuzzy or soft biclustering through entropic regularization. These methods outperform other optimal transport biclustering models, in terms of both document and term clustering, on several regular and large scale datasets, while being more computationally and memory efficient. We emphasize once again that the approach we propose is specifically tailored to datasets consisting of dyadic data. 2 Methodology Notations. In what follows, ∆n = {p ∈ Rn+| ∑n i=1 pi = 1} denotes the n-dimensional standard simplex. Π(w,v) = {Z ∈ Rn×k+ |Z1 = w,Z⊤1 = v} denotes the transportation polytope, where w ∈ ∆n and v ∈ ∆k are the marginals of the joint distribution Z and 1n is a vector of ones. Matrices are denoted with uppercase boldface letters, and vectors with lowercase boldface letters. For a matrix M, its i-th row is mi and its j-th column is m′j We have that ∥.∥0 is the 0-norm which returns the number of nonzero elements of its argument. 2.1 Preliminaries We first need to introduce exact discrete OT and its entropically regularized counterpart, and show how biclustering can be posed as an integer program. Discrete OT as a linear program. The goal of discrete optimal transport is to find a minimal cost transport plan between a source probability distribution w and a target distribution v. Here we are interested in the discrete case of the Kantorovich formulation of OT, that is OT(M,w,v) ≜ min Z∈Π(w,v) ⟨M,Z⟩ (2) where M ∈ Rn×k is the cost matrix, and mij quantifies the effort needed to transport a probability mass from wi to vj . Discrete entropy regularized OT. It has been suggested in the literature [6, 5] that the use of a regularization such as entropic regularization can lead to better computational and statistical efficiency. OTλ(M,w,v) ≜ min Z∈Π(w,v) ⟨M,Z⟩ − λH (Z) (3) where H is the entropy defined as H(Z) ≜ − ∑ i,j zij log zij and λ controls the strength of regularization. The computational efficiency comes from the fact that the unique solution of this problem is of the structure Z := diag(a) exp(−M/λ)diag(b), a rescaled elementwise negative exponential of the cost M, where a and b are scaling vectors. These vectors can be found efficiently using the Sinkhorn-Knopp algorithm. Biclustering as an integer program. The Block seriation problem [27] consists in finding two permutation matrices, one for the rows and one for the columns s.t. dense blocks appear along the diagonal of the permuted matrix. 
A possible definition of the block seriation problem is as follows: given a matrix B ∈ Rn×d s.t bij gives the strength of the association between row i and column j (such as in the case of a biadjacency matrix, for example), we have max C ∑ i,j bijcij (4) subject to ∀ i, j cij ∈ {0, 1} ∀ j ∑ i cij ≥ 1 ∀ i ∑ j cij ≥ 1 ∀ i, j, i′, j′ cij + cij′ + ci′j′ − ci′j ≤ 2 ci′j′ + ci′j + cij − cij′ ≤ 2 ci′j + cij + cij′ − ci′j′ ≤ 2 cij′ + ci′j′ + ci′j − cij ≤ 2 A solution C is a block diagonal matrix up to a permutation of its rows and columns. The block seriation problem is an integer programming problem that is NP-hard. One approach for solving this problem uses a simplified version where a rank constraint rank(C) ≤ k is added for k the number of desired biclusters. Integrating this constraint into (4), we can define a new problem by low-rank factorization of C, i.e. C = ZW⊤, which we formulate as max Z∈Γ(n,k) W∈Γ(d,k) ∑ i,j,h bijzihwjh (5) where Γ(n, k) = {Z ∈ {0, 1}n×k | Z1 = 1} is the set of hard partitions of dimension n × k. A simple heuristic for solving this problem involves alternatingly solving for Z given W, and vice-versa, using classical clustering algorithms, before identifying biclusters through the rearranged matrix C, which displays a block diagonal structure, as shown in figure 1a. The biclusters are identified by grouping together the rows and columns that form a block along the diagonal. 2.2 Biclustering using Optimal Transport Here we propose a new biclustering problem based on block seriation and optimal transport. For this purpose we first define what we term an anti-adjacency matrix. Note that a similar concept has been discussed in [36]. Definition 1 (Anti-adjacency matrix) Given a graph characterized by an adjacency matrix A, we have a corresponding anti-adjacency matrix A s.t. aij quantifies the discrepancy between nodes i and j. We consider a bipartite graph characterized by its biadjacency matrix B = (bij) ∈ Rn×d. The rows of B are endowed with weights w ∈ ∆n and its columns with weights v ∈ ∆d. We also consider a row exemplar distribution r ∈ ∆r and a column exemplar distribution c ∈ ∆c. Depending on the availability of a priori information about the data, these weight vectors can be set to uniform distributions. Now let its anti-biadjacency matrix be B = L(B), where L : Rn×d → Rn×d means that bij , the association between node i and node j, is transformed into a discrepancy measure L(B)ij . Thus, we define the optimal transport block seriation problem as the following bilinear program BCOT(w,v, r, c) ≜ min Z∈Π(w,r) W∈Π(v,c) ∑ i,j,k L(B)ijzikwjk ≡ min Z∈Π(w,r) W∈Π(v,c) 〈 L(B),ZW⊤ 〉 (6) where Z is a transport plan (or coupling) between between the row distribution w and the row exemplar distribution r, and similarly for W w.r.t. the column distribution v and the column exemplar distribution c. Inducing a biclustering via BCOT. We will now show how to obtain a partition of the rows and the columns given a solution pair (Z,W). In what follows our aim is to identify an almost-hard clustering couple for rows and columns from the couplings Z and W. Definition 2 (h-almost hard clustering) We define an h-almost hard clustering as a clustering whose assignment matrix is C ∈ Rn×k s.t. ∥C∥0 = n + h and for each row c of C we have that ∥c∥0 > 0. When h = 0, we obtain a standard hard clustering with one non-zero element per row. 
Proposition 1. For $w$, $v$, $r$ and $c$ containing no zeros, there exists an optimal pair of coupling matrices $Z$ and $W$ that are $h$-almost hard clusterings with $h \in \{0, \ldots, k-1\}$. Furthermore, when $n = k$ (resp. $d = k$) and $w = r$ (resp. $v = c$), this $Z$ (resp. $W$) becomes a hard clustering, i.e., $Z \in \Gamma(n, n)$ (resp. $W \in \Gamma(d, d)$). (Proofs for the propositions are given in the appendix.)

This means that the solutions are already almost a hard partition of the data, since $k \ll n, d$. To obtain a final hard clustering in the strict sense, we assign each row (resp. column) to the cluster corresponding to the largest value in the associated row of $Z$ (resp. $W$). This should not significantly change the structure of the solution. Figure 1b provides an illustration: here we see the block diagonal structure generated by the product of the two coupling matrices $C = ZW^\top$, which resembles the biclustering produced by hard block seriation in Figure 1a, apart from a few nonzero entries off the block diagonal that are hard to see immediately.

Intuition for BCOT. To explain the intuition behind the proposed approach we need to look at how the problem is solved. The optimization procedure described in Algorithm 1 alternates between the computation of an optimal transport plan $Z$ given $W$ and vice versa. When solving for $Z$ given $W$, the problem can be rewritten as

$$\mathrm{BCOT}(w, v, r, c) \equiv \min_{Z \in \Pi(w,r)} \langle L(B)W, Z \rangle. \quad (7)$$

This is an optimal transport problem with $L(B)W$ as the cost matrix. The resulting transport plan $Z$ can be seen as a kind of row cluster assignment matrix: if $z_{ih} > 0$, then row $i$ is assigned to cluster $h$. The same holds for $W$, which can be seen as a column cluster assignment matrix. This also means that, since $L(B)$ measures the dissimilarity between the rows and the columns, the cost matrix $L(B)W$ represents the dissimilarity between rows and row exemplars (or representatives, or centroids). In particular, $(L(B)W)_{ih} = L(B)_i\, w'_h$ is the dissimilarity, or cost of probability mass transportation, between row $i$ and row cluster exemplar $h$. The reasoning is the same for the columns and the optimal coupling $W$.

Low-rank optimal transport. Biclustering is the main purpose of the approach we propose, but there is another interesting use case.

Proposition 2. For equal target row and column representative distributions, i.e., $r = c$, containing no zero entries, and given a solution pair $Z$ and $W$ to BCOT, the matrix $Q = Z\, \mathrm{diag}(1/r)\, W^\top$ is an approximation of the optimal transport plan that solves problem (2) and whose rank is at most $\min(\mathrm{rank}(Z), \mathrm{rank}(W))$.

Some recent works [16, 31] have suggested that this kind of low-rank regularization is preferable to entropic regularization in certain respects. For example, the rank parameter is easier to select, since it has simple bounds (an integer between 1 and $n$). This may be contrasted with the regularization strength $\lambda$ in the Sinkhorn algorithm, which is continuous.
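For concreteness, here is how the rounding step of Proposition 1 and the low-rank plan of Proposition 2 could be realized in NumPy; the function names are ours, and this is a sketch rather than the authors' implementation:

```python
import numpy as np

def round_to_partitions(Z, W):
    """Hard rounding: each row/column joins the cluster carrying most of its mass."""
    return Z.argmax(axis=1), W.argmax(axis=1)

def low_rank_plan(Z, W, r):
    """Proposition 2: with r = c, Q = Z diag(1/r) W^T approximates the OT plan."""
    return Z @ np.diag(1.0 / r) @ W.T
```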
2.3 Fuzzy Biclustering via Regularized Optimal Transport

As previously mentioned, entropic regularization is attractive because of its various useful features, including statistical and computational efficiency. Another consequence of entropic regularization, however, is that the optimal couplings $Z$ and $W$ are dense matrices, owing to the structure of the optimal solution of entropically regularized OT problems. We formulate the problem as follows:

$$\mathrm{BCOT}_\lambda(w, v, r, c) \triangleq \min_{Z \in \Pi(w,r),\ W \in \Pi(v,c)} \left\langle L(B), ZW^\top \right\rangle - \lambda_Z H(Z) - \lambda_W H(W) \quad (8)$$

where $\lambda_Z$ and $\lambda_W$ are the regularization parameters.

Fuzzy block seriation. We propose a fuzzy variant of the block seriation problem that, by extension, allows us to define a fuzzy variant of BCOT using entropic regularization. Let the fuzzy block seriation problem be defined as

$$\max_{Z \in \Gamma_s(n,k),\ W \in \Gamma_s(d,k)} \sum_{i,j,h} b_{ij} z_{ih} w_{jh} + \Omega(Z, W) \quad (9)$$

where $\Omega(Z, W)$ is a regularization term introduced to make the partition matrices $Z$ and $W$ dense (for example, entropic regularization or low-rank constraints), and $\Gamma_s(n, k) = \{Z \in \mathbb{R}^{n \times k}_+ \mid Z\mathbf{1} = \mathbf{1}\}$ is the set of fuzzy partitions. Intuitively, for a solution pair $(Z, W)$, up to a constant factor, each entry of the block seriation matrix $C = ZW^\top$ can be seen as the probability that its corresponding row and column belong to the same bicluster, i.e. $c_{ij} = z_i w_j^\top = \sum_{h=1}^{k} z_{ih} w_{jh} = p(b_i, b'_j) = \sum_{h=1}^{k} p(b_i, b'_j \in h)$. It is easy to see how problem (9) is related to problem (8), and that the couplings corresponding to solutions of the problem give the probability that the different rows and columns belong to the same biclusters. Figure 1c shows biclusters produced by the solutions of BCOT$_\lambda$. As with BCOT, a block diagonal structure is formed; however, there are also several off-block-diagonal nonzero entries that represent the probabilities of the row-column pairs belonging to the same biclusters.

3 Links to Existing Work

3.1 Modularity Maximization in Bipartite Graphs [3]. This model co-clusters binary and contingency matrices by directly maximizing an adapted version of the modularity measure traditionally used for networks. The criterion that it optimizes is

$$\max_{Z \in \Gamma(n,k),\ W \in \Gamma(d,k)} \sum_{i,j,h} z_{ih} w_{jh} \left( b_{ij} - \frac{b_{\cdot j}\, b_{i\cdot}}{b_{\cdot\cdot}} \right). \quad (10)$$

By setting $L(B) = -\left(B - \frac{1}{b_{\cdot\cdot}} B \mathbf{1} \mathbf{1}^\top B\right)$, this problem becomes equivalent to ours; the difference lies in the constraints on $Z$ and $W$.

3.2 Modularity-Based Sparse Soft Graph Clustering [23]. Here the authors proposed a fuzzy variant of the above problem (although in the context of traditional clustering rather than biclustering). Solving the problem gives, for each element of the dataset, a probability of that element belonging to a given cluster. Our proposed entropic regularization variant represents a kind of extension of this problem to bipartite graphs.

3.3 Directional Co-clustering with a Conscience [30, 1]. This model makes use of the block von Mises-Fisher mixture model for co-clustering directional data on the unit sphere. It optimizes the following criterion:

$$\max_{Z \in \Gamma(n,k),\ W \in \Gamma(d,k)} \sum_{i,j,h} \frac{1}{\sqrt{z_{\cdot h}\, w_{\cdot h}}}\, z_{ih} w_{jh} b_{ij}. \quad (11)$$

In our formulation, if we define $L(B) = -B$ and apply cluster-size normalization to the optimal transport plans, $\tilde{Z} = Z\, \mathrm{diag}(Z^\top \mathbf{1})^{-1/2}$ and $\tilde{W} = W\, \mathrm{diag}(W^\top \mathbf{1})^{-1/2}$, after computing $Z$ and $W$ respectively in Algorithm 1, we obtain a more general version of the algorithm proposed by the authors for solving problem (11).

3.4 Bipartite Correlation Clustering [2]. In the case where the cost function results in a complete bipartite graph with '+' and '-' edges, with

$$L(B)_{ij} = \begin{cases} -1 & \text{if } b_{ij} > 0 \\ +1 & \text{otherwise} \end{cases} \quad (12)$$

we get what is known as Bipartite Correlation Clustering. The solution to this problem maximizes the number of agreements, i.e. the number of all '+' edges within clusters plus all '-' edges distributed across clusters.

4 Optimization and Complexity

Optimization. Since the block seriation problem is NP-hard, computing an exact solution is prohibitive. An efficient and widely used heuristic for solving these kinds of problems is block coordinate descent, where row assignments are computed for fixed column assignments, and then vice versa, in alternation.
We express the proposed algorithm in pseudo-code as Algorithm 1. At each iteration we solve two intermediate optimal transport problems with cost matrices of dimensions $n \times k$ and $d \times k$. Since $B$ is generally sparse, and $L$ can be defined such that $L(B)$ retains a similarly sparse structure, the computation of the intermediate cost matrices $L(B)W$ and $L(B)^\top Z$ is reasonably efficient. We also observed that the algorithm needs few iterations to converge, as shown in Figure 2, whether for BCOT or BCOT$_\lambda$.

Algorithm 1: BCOT
Input: $B$ bi-adjacency matrix; $w$ and $v$ row and column weights; $r$ and $c$ row and column exemplar distributions
Output: $\pi_r$, $\pi_c$ row and column partitions
$W \leftarrow W_{\mathrm{init}}$
while not converged do
    $Z \leftarrow \arg \mathrm{OT}(L(B)W,\, w,\, r)$
    $W \leftarrow \arg \mathrm{OT}(L(B)^\top Z,\, v,\, c)$
end
Generate $\pi_r$, $\pi_c$ from $Z$ and $W$

Proposition 3. The computational complexity of the BCOT algorithm (Algorithm 1) when using an exact OT solver is $O(tk\|B\|_0 + tnk(n+k)\log(n+k) + tdk(d+k)\log(d+k))$, and when using entropic regularization the complexity is $O(tk\|B\|_0 + tkn + tkd)$, where $t$ is the number of iterations.

In Table 1, we report the computational and spatial complexities of the different biclustering approaches. Our model has the same spatial complexity as the COOT variants and a better one than the CCOT variants. As regards computational complexity, our model should in most cases be faster on sparse data, and our experiments support this conjecture. For reproducibility, we publicly release our code (https://github.com/chakib401/BCOT).

5 Experiments

We ran experiments using term-document matrices. The benefit of biclustering this kind of data is that the resulting biclusters contain both documents and the words that characterize them, which helps in interpreting the clustering of the documents. Additional experiments on synthetic and gene expression data are available in the appendix.

5.1 Datasets

We evaluate BCOT on six benchmark document-term datasets: ACM, DBLP, PubMed, Wiki, Ohscal, and 20 Newsgroups. Their characteristics are shown in Table 2. ACM, DBLP, PubMed and Wiki are attributed networks from which we use only the node-level features, which correspond to term-document matrices. We also selected the Ohscal collection and 20 Newsgroups as large-scale document-term matrices to serve as computational efficiency benchmarks.

5.2 Experimental Setup

In our experiments we define the loss function as $L(B) = -cB$, where $c$ is selected from $\{1, k, d, n\}$. For BCOT$_\lambda$, the regularization parameter $\lambda$ is selected from $\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1, 10\}$. The best hyper-parameters are those that minimize the number of empty clusters; in the case of ties, we select according to the Davies-Bouldin index of the partition [7]. Random restarts are not used for any of the algorithms, including k-means. We use the implementations provided by the authors for COOT, COOT$_\lambda$ and CCOT-GW; the code for CCOT was not available, so we implemented it based on the code for CCOT-GW. All reported figures are averages over 10 runs. All the experiments were performed on the same machine with an Intel(R) Xeon(R) CPU and 12GB RAM. For OT solvers we made use of the POT package [15].
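For illustration, Algorithm 1 can be sketched in a few lines on top of POT's standard solvers (`ot.emd` for exact OT, `ot.sinkhorn` for the entropic variant); the initialization, the fixed iteration budget, and the argmax rounding reflect our own choices rather than the exact released code:

```python
import numpy as np
import ot  # POT: Python Optimal Transport, the solver library used in the experiments

def bcot(B, w, v, r, c, loss=lambda B: -B, reg=None, n_iter=30, seed=0):
    """Alternating solver for BCOT (reg=None, exact OT) or BCOT_lambda (reg > 0, Sinkhorn)."""
    L = loss(B)
    rng = np.random.default_rng(seed)
    # Initialize W with any feasible coupling in Pi(v, c); projecting a random
    # cost matrix with Sinkhorn is one simple (and our own) choice for W_init.
    W = ot.sinkhorn(v, c, rng.random((len(v), len(c))), reg=1.0)
    for _ in range(n_iter):
        Z = ot.emd(w, r, L @ W) if reg is None else ot.sinkhorn(w, r, L @ W, reg)
        W = ot.emd(v, c, L.T @ Z) if reg is None else ot.sinkhorn(v, c, L.T @ Z, reg)
    # Round the (almost hard) couplings to hard partitions pi_r, pi_c
    return Z.argmax(axis=1), W.argmax(axis=1)
```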
5.3 Document Clustering

Metrics. Here the evaluation is straightforward: we adopt three popular clustering metrics, clustering accuracy (CA), normalized mutual information (NMI) [4], and adjusted Rand index (ARI) [24].

Performance. Document clustering results on ACM, DBLP, PubMed and Wiki are given in Table 3 for the three metrics. In all cases the best result is obtained either by BCOT or by BCOT$_\lambda$. Moreover, on Wiki, BCOT$_\lambda$ gives competitive results compared with the state-of-the-art attributed graph clustering methods presented in [14], despite not having access to the graph structure information in the Wiki citation network.

Efficiency. Figure 3 plots the document clustering performance (accuracy against training time) of the different methods on the two large-scale document-term matrices, 20 Newsgroups and Ohscal. BCOT offers the best accuracy while BCOT$_\lambda$ is the fastest method on both datasets. For both BCOT and COOT, the entropically regularized versions are faster than their exact counterparts, and CCOT suffers from very high computation times, due mainly to the fact that this method requires pairwise distance matrices to be computed on the rows and columns.

5.4 Term Clustering

Metrics. Unlike document clustering, there is no ground-truth partition for terms, so we need another way of evaluating term clustering results. One generally accepted technique is to analyse the semantic coherence of the clusters obtained. To this end we introduce a metric based on pointwise mutual information (PMI). PMI is a frequently used information-theoretic measure for quantifying the relationship between pairs of discrete random variable outcomes; it was chosen because prior research [29] has shown that it is closely associated with human judgements of word relatedness. The PMI between the terms $w_i$ and $w_j$ is calculated as

$$\mathrm{PMI}(w_i, w_j) = \log \frac{p(w_i, w_j)}{p(w_i)\, p(w_j)}. \quad (13)$$

In the context of term clustering, given the word co-occurrence matrix $K = B^\top B$, the PMI is estimated as

$$\mathrm{PMI}(w_i, w_j) = \log \frac{k_{\cdot\cdot}\, k_{ij}}{k_{i\cdot}\, k_{\cdot j}}. \quad (14)$$

To evaluate a partition of terms $P$, we propose a metric based on intra- and inter-cluster PMI, as follows:

$$\mathrm{PMI}_{\mathrm{intra}}(P) = \sum_{i \in P} \sum_{j \in P} k_{ij} \quad (15)$$

$$\mathrm{PMI}_{\mathrm{inter}}(P) = \sum_{i \in P} \sum_{j \notin P} k_{ij} \quad (16)$$

In this way, a good clustering should reveal high intra-cluster semantic relatedness, corresponding to higher PMI values. Using the intra and inter PMIs, we propose the following coherence index:

$$\mathrm{coherence}(\mathcal{P}) = \frac{1}{\sum_{P \in \mathcal{P}} |P|} \sum_{P \in \mathcal{P}} |P| \left( \mathrm{PMI}_{\mathrm{intra}}(P) - \mathrm{PMI}_{\mathrm{inter}}(P) \right). \quad (17)$$

Our reasoning is this: the greater the semantic proximity between terms in the same clusters, and the greater the semantic distance between terms in different clusters, the higher the value of coherence.

Results. Since there is no ground-truth number of term clusters, we use the cluster-number estimates produced by CCOT-GW for all the other models, so that coherence values are directly comparable between them; comparisons based on different numbers of clusters would favor the model using the larger number of clusters. Table 4 shows the coherences obtained across the different datasets using our approach, along with those of the baselines. It is clear that BCOT succeeds in capturing more semantics than the other approaches since, whatever the dataset, one or other of the two BCOT variants gives the highest coherence.
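For concreteness, the coherence index can be computed as in the sketch below. Note one reading choice on our part: equations (15)-(16) as printed sum the raw co-occurrences $k_{ij}$, while the surrounding text speaks of intra/inter PMI; this sketch sums PMI values, and that interpretation, like the function name, is ours.

```python
import numpy as np

def coherence(B, labels, eps=1e-12):
    """Coherence index (17) for a term partition; labels[j] is the cluster of term j."""
    labels = np.asarray(labels)
    K = np.asarray(B.T @ B, dtype=float)      # word co-occurrence counts K = B^T B (dense here)
    total, marg = K.sum(), K.sum(axis=1)
    # PMI estimate of eq. (14); eps guards against log(0) for non-co-occurring pairs
    pmi = np.log((total * K + eps) / (np.outer(marg, marg) + eps))
    num, denom = 0.0, 0
    for p in np.unique(labels):
        mask = labels == p
        intra = pmi[np.ix_(mask, mask)].sum()   # within-cluster relatedness
        inter = pmi[np.ix_(mask, ~mask)].sum()  # cross-cluster relatedness
        num += mask.sum() * (intra - inter)
        denom += mask.sum()
    return num / denom
```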
5.5 Statistical Significance

We performed a Nemenyi post-hoc test [28, 8] with a confidence level of 90% on the document and term clustering results, to determine whether our model outperforms the other OT biclustering approaches in a statistically significant way. To conduct this test we generated 20 performance rankings of the OT biclustering models, one for each dataset and quality-metric pair, covering both document and term clustering. Figure 4 shows the results of the test. We see that two differently performing groups were identified, one comprising BCOT and BCOT$_\lambda$ and giving better results than the other group, comprising the remaining COOT and CCOT variants. This also indicates that, with this number of datasets and metrics, the test was unable to tell COOT and CCOT apart in a statistically significant way.

6 Conclusion

Clustering and biclustering through optimal transport are still at a nascent stage, with many challenges remaining unsolved. This paper introduces a novel problem for biclustering using optimal transport that takes into account the sparse nature of certain types of dyadic data, such as document-term matrices, to enable more computationally efficient resolution. The problem is posed as a bilinear program that we solve using an efficient block coordinate descent algorithm to find a vertex solution. Experiments on a number of document-term datasets suggest that the proposed approach is effective at finding clusters that correspond to ground-truth document classes, while generating semantically coherent partitions for the terms. In this setting, our model outperforms recent OT biclustering methods by a significant margin, while being more computationally efficient.

Acknowledgments and Disclosure of Funding

This work has been funded by Informatique Caisse des Dépôts et Consignations (ICDC), Association Nationale de la Recherche et de la Technologie (ANRT), and Idex-Spectrans of Université Paris Cité.
1. What is the focus and contribution of the paper on bi-clustering?
2. What are the strengths of the proposed approach, particularly in its generalizability and efficiency?
3. What are the weaknesses of the paper, especially regarding external metrics and hyperparameter settings?
4. Do you have any questions regarding the relationship between L in COOT and BICOT or the choice of the function L?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper proposes a new approach to bi-clustering, BICOT, and its entropic regularization BICOT_{\lambda}. The proposed BICOT model is quite general. First, it can be reduced to a low-rank solution for optimal transport under specific distribution conditions. Secondly, it can be reduced to several bi-clustering models in the literature. The authors also prove that the computational complexity of BICOT is the same as that of the previous co-clustering work COOT [26] and lower than that of the CCOT variants [19]. To support their claims, in parallel to the theoretical demonstrations, the authors conduct a careful analysis of their algorithm by comparing their method with the state of the art on dyadic data sets of increasing size. In particular, in terms of accuracy, adjusted Rand index, and normalized mutual information, BICOT_{\lambda} performs best in all the experiments. From a theoretical point of view, this work introduces the problem in a more general and straightforward form that can be traced back, through appropriate choices, to several models previously introduced in the literature. The main idea is to formulate the bi-clustering problem using two building blocks: block seriation and optimal transport.

Strengths And Weaknesses
The strength of this work is mainly based on three aspects:
1. The proposed approach generalizes many existing algorithms, which can be obtained as special cases.
2. The presented algorithm is more efficient both in terms of complexity and in terms of memory usage.
3. The authors provide evidence of the effectiveness of their approach both theoretically and experimentally.
The paper conducts an extended analysis over several data sets showing strong performance compared to other existing approaches. The results are well exposed, and the paper is well written and easy to follow.
A possible weakness is that determining the number of clusters requires resorting to external metrics. Furthermore, it is unclear whether the authors set the hyper-parameters by performing cross-validation on a data set independent of the validation set. Also, there is no reference to the hyper-parameters used in the methods that serve as baselines. Finally, concerning the term clustering experiment, it seems that the best performances are due to the fact that some methods have not been optimized.

Questions
- In (9), L is not used; it is not well explained in which sense (9) is related to (8).
- In [26] it is proved that COOT is a distance. Some discussion comparing L in COOT and BICOT would be appreciated.
- Comparisons with [26] are done on documents and terms, and the paper says this would not be possible on images. On the other hand, COOT has shown experiments on MNIST and USPS (see Figure 1 in [26] (on NeurIPS)), simply normalizing pixel magnitudes to [0,1].
- The computational complexity of [26] is O(min{(n + n')dd' + n'^2 n; (d + d')nn' + d'^2 d}).
- The performance of the proposed approach strongly relies on the choice of the function L. How much will the results change when changing L? Is there a way to define an optimal L for a given quality measure?
- How will the algorithm's asymptotic complexity change if one uses a dense function L?
- How much do the results depend on the regularization function Ω in BCOT_λ?
- How were the hyper-parameters of k-means clustering chosen?
Typos:
- Line 99: L(b)_{ij}, remove Z
- Line 101: "between" appears twice
- (12): L(b_{ij})
- Line 191: in --> is
- Line 194: is available
- Line 280: till --> still
- Proof of Proposition 3, line 445: twice L(B)W --> L(B)W and L(B)^T Z

Limitations
The limitations of the proposed approach are not fully explained. For example, it seems that the asymptotically low cost of the algorithm is largely due to the sparseness of the cost matrix. Although one has the freedom to choose this matrix, for some applications a sparse matrix may not give satisfactory results. However, the use of dense matrices could considerably slow down the algorithm, making it de facto uncompetitive.
NIPS
Title: On preserving non-discrimination when combining expert advice

Abstract
We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. We consider the most basic extension of classical online learning: Given a class of predictors that are individually non-discriminatory with respect to a particular metric, how can we combine them to perform as well as the best predictor, while preserving non-discrimination? Surprisingly we show that this task is unachievable for the prevalent notion of equalized odds that requires equal false negative rates and equal false positive rates across groups. On the positive side, for another notion of non-discrimination, equalized error rates, we show that running separate instances of the classical multiplicative weights algorithm for each group achieves this guarantee. Interestingly, even for this notion, we show that algorithms with stronger performance guarantees than multiplicative weights cannot preserve non-discrimination.

1 Introduction
The emergence of machine learning in the last decade has given rise to an important debate regarding the ethical and societal responsibility of its offspring. Machine learning has provided a universal toolbox enhancing the decision making in many disciplines from advertising and recommender systems to education and criminal justice. Unfortunately, both the data and their processing can be biased against specific population groups (even inadvertently) in every single step of the process [4]. This has generated societal and policy interest in understanding the sources of this discrimination, and interdisciplinary research has attempted to mitigate its shortcomings.

Discrimination is commonly an issue in applications where decisions need to be made sequentially. The most prominent such application is online advertising, where platforms need to sequentially select which ad to display in response to particular query searches. This process can introduce discrimination against protected groups in many ways, such as filtering particular alternatives [12, 2] and reinforcing existing stereotypes through search results [38, 25]. Another canonical example of sequential decision making is medical trials, where underexploration on female groups often leads to significantly worse treatments for them [31]. Similar issues occur in image classification, as stressed by "gender shades" [7]. The reverse (overexploration in minority populations) can also cause concerns, especially if conducted in a non-transparent fashion [5].

In these sequential settings, the assumption that data are i.i.d. is often violated. Online advertising, recommender systems, medical trials, image classification, loan decisions, and criminal recidivism all require decisions to be made sequentially. The corresponding labels are not identical across time and can be affected by the economy, recent events, etc. Similarly, labels are also not independent across rounds – if a bank offers a loan, this decision can affect whether the loanee or their environment will be able to repay future loans, thereby affecting future labels, as discussed by Liu et al. [32]. As a result, it is important to understand the effect of this adaptivity on non-discrimination. The classical way to model settings that are not i.i.d.
is via adversarial online learning [30, 17], which poses the question: given a class F of predictors, how can we make online predictions that perform as well as the best predictor from F in hindsight? The most basic online learning question (answered via the celebrated "multiplicative weights" algorithm) concerns competing with a finite set of predictors. The class F is typically referred to as "experts" and can be thought of as "features" of the example, where we want to make online predictions that compete with the best 1-sparse predictor. In this work, we wish to understand the interplay between adaptivity and non-discrimination and therefore consider the most basic extension of the classical online learning question: given a class of individually non-discriminatory predictors, how can we combine them to perform as well as the best predictor, while preserving non-discrimination?

The assumption that predictors are individually non-discriminatory is a strong assumption on the predictors, and it makes the task trivial in the batch setting, where the algorithm is given labeled examples and wishes to perform well on unseen examples drawn from the same distribution. This is because the algorithm can learn the best predictor from the labeled examples and then follow it (since this predictor is individually non-discriminatory, the algorithm does not exhibit discrimination). This enables us to understand the potential overhead caused by adaptivity and significantly strengthens any impossibility result. Moreover, we can assume that predictors have been individually vetted to satisfy the non-discrimination desiderata – we therefore wish to understand how to efficiently compose these non-discriminatory predictors while preserving non-discrimination.

1.1 Our contribution

Our impossibility results for equalized odds. Surprisingly, we show that for a prevalent notion of non-discrimination, equalized odds, it is impossible to preserve non-discrimination while also competing comparably with the best predictor in hindsight (the no-regret property). Equalized odds, suggested by Hardt et al. [20] in the batch setting, restricts the set of allowed predictors by requiring that, when examples come from different groups, the prediction is independent of the group conditioned on the label. In binary classification, this means that the false negative rate (the fraction of positive examples predicted negative) is equal across groups, and the same holds for the false positive rate (defined analogously). This notion was popularized by a recent debate on the potential bias of machine learning risk tools for criminal recidivism [1, 10, 28, 16].

Our impossibility results demonstrate that the order in which examples arrive significantly complicates the task of achieving the desired efficiency while preserving non-discrimination with respect to equalized odds. In particular, we show that any algorithm agnostic to the group identity either cannot achieve performance comparable to the best predictor or exhibits discrimination in some instances (Theorem 1). This occurs in phenomenally simple settings with only two individually non-discriminatory predictors, two groups, and perfectly balanced instances: the groups are of equal size and each receives an equal number of positive and negative labels. The only imbalance lies in the order in which the labels arrive, which is also relatively well behaved – labels are generated from two i.i.d. distributions, one in the first half of the instance and one in the second half.
Although in many settings we cannot actively use the group identity of the examples for legal reasons (e.g., in hiring), one may wonder whether these impossibility results disappear if we can actively use the group information to compensate for past mistakes. We show that this is also not the case (Theorem 2). Although the groups here are not perfectly balanced, the construction is again very simple and consists of only two groups and two predictors: one always predicting positive and one always predicting negative. The simplicity of the settings, combined with the very strong assumption that the predictors are individually non-discriminatory, speaks to the trade-off between adaptivity and non-discrimination with respect to equalized odds.

Our results for equalized error rates. The strong impossibility results with respect to equalized odds invite the natural question of whether there exists some alternative fairness notion that, given access to non-discriminatory predictors, achieves efficiency while preserving non-discrimination. We answer the above positively by suggesting the notion of equalized error rates, which requires that the average expected loss (regardless of whether it stems from false positives or false negatives) encountered by each group be the same. This notion makes sense in settings where performance and fairness are measured with respect to the same objective. Consider a medical application where people from different subpopulations wish to receive appropriate treatment and any error in treatment costs equally both towards performance and towards fairness.¹ It is morally objectionable to discriminate against one group, e.g. based on race, using it as experimentation to enhance the quality of service for the other, and it is reasonable to require that all subpopulations receive the same quality of service.

For this notion, we show that, if all predictors are individually non-discriminatory with respect to equalized error rates, running a separate multiplicative weights algorithm for each subpopulation preserves this non-discrimination without decay in efficiency (Theorem 3). The key property we use is that the multiplicative weights algorithm is guaranteed to perform not only no worse than the best predictor in hindsight but also no better; this property holds for a broader class of algorithms [14]. Our result applies to general loss functions beyond binary predictions and only requires predictors to satisfy the weakened assumption of being approximately non-discriminatory.

Finally, we examine whether the decisions to run separate algorithms, and to run this particular not-so-efficient algorithm, were important for the result. We first give evidence that running separate algorithms is essential for the result: if we run a single instance of "multiplicative weights" or "follow the perturbed leader", we cannot guarantee non-discrimination with respect to equalized error rates (Theorem 4). We then suggest that the property of not performing better than the best predictor is also crucial; in particular, better algorithms that satisfy the stronger guarantee of low shifting regret [21, 6, 34] are also unable to guarantee this non-discrimination (Theorem 5). These algorithms are considered superior to classical no-regret algorithms as they can better adapt to changes in the environment, which has nice implications in game-theoretic settings [35].
Our latter impossibility result is a first application where having these strong guarantees against changing benchmarks is not necessarily desirable, and it is therefore of independent learning-theoretic interest.

1.2 Related work

There is a large line of work on fairness and non-discrimination in machine learning (see [36, 8, 13, 41, 22, 20, 10, 28, 26] for a non-exhaustive list). We elaborate on works that either study group notions of fairness or fairness in online learning. The last decade has seen a lot of work on group notions of fairness, mostly in the classification setting. Examples include notions that compare the percentage of members predicted positive, such as demographic parity [8], disparate impact [15], equalized odds [20], and calibration across groups [10, 28]. There is no consensus on a universal fairness notion; rather, the specific notion considered is largely task-specific. In fact, previous works identified that these notions are often not compatible with each other [10, 28], posed concerns that they may introduce unintentional discrimination [11], and suggested the need to go beyond such observational criteria via causal reasoning [27, 29]. Prior to our work, group fairness notions have been studied primarily in the batch learning setting with the goal of optimizing a loss function subject to a fairness constraint, either in a post-hoc correction framework as proposed by Hardt et al. [20] or more directly during training from batch data [41, 19, 39, 40, 3], which requires care due to the predictors being discriminatory with respect to the particular metric of interest. The setting we focus on in this paper does not have the challenges of the above since all predictors are non-discriminatory; however, we obtain surprising impossibility results due to the ordering in which labels arrive.

Recently, fairness in online learning has also started receiving attention. One line of work focuses on imposing a particular fairness guarantee at all times for bandits and contextual bandits, either for individual fairness [22, 23] or for group fairness [9]. Another line of work points to counterintuitive externalities of using contextual bandit algorithms agnostic to the group identity and suggests that heterogeneity in data can replace the need for exploration [37, 24]. Moreover, following a seminal paper by Dwork et al. [13], a line of work aims to treat similar people similarly in online settings [33, 18]. Our work distinguishes itself from these directions mainly in the objective, since we require the non-discrimination to hold in the long term instead of at every time step; this extends the classical batch definitions of non-discrimination to the online setting. In particular, we only focus on situations where there are enough samples from each population of interest, and we do not penalize the algorithm for a few wrong decisions, which would lead it to be overly pessimistic. Another difference is that previous work focuses either on individual notions of fairness or on i.i.d. inputs, while our work is about non-i.i.d. inputs and group notions of fairness.

¹ In contrast, under equalized odds, a misprediction only counts towards the false negative metric if the label is positive.

2 Model

Online learning protocol with group context. We consider the classical online learning setting of prediction with expert advice, where a learner needs to make sequential decisions for T rounds by combining the predictions of a finite set F of d hypotheses (also referred to as experts).
We denote the outcome space by $Y$; in binary classification, this corresponds to $Y = \{+,-\}$. Additionally, we introduce a set of disjoint groups $G$, which identifies subsets of the population based on a protected attribute (such as gender, ethnicity, or income). The online learning protocol with group context proceeds in $T$ rounds. Each round $t$ is associated with a group context $g(t) \in G$ and an outcome $y(t) \in Y$. We denote the resulting $T$-length time-group-outcome sequence tuple by $\sigma = \{(t, g(t), y(t)) \in \mathbb{N} \times G \times Y\}_{t=1}^{T}$. This is a random variable that can depend on the randomness in the generation of the groups and the outcomes. We use the shorthand $\sigma^{1:\tau} = \{(t, g(t), y(t)) \in \mathbb{N} \times G \times Y\}_{t=1}^{\tau}$ to denote the subsequence until round $\tau$. The exact protocol for generating these sequences is described below. At round $t = 1, 2, \ldots, T$:

1. An example with group context $g(t) \in G$ arrives stochastically or is adversarially selected.

2. The learning algorithm or learner $L$ commits to a probability distribution $p^t \in \Delta(d)$ across experts, where $p^t_f$ denotes the probability that she follows the advice of expert $f \in F$ at round $t$. This distribution $p^t$ can be a function of the sequence $\sigma^{1:t-1}$. We call the learner group-unaware if she ignores the group context $g(\tau)$ for all $\tau \le t$ when selecting $p^t$.

3. An adversary $A$ then selects an outcome $y(t) \in Y$. The adversary is called adaptive if the groups/outcomes at round $t = \tau + 1$ are a function of the realization of $\sigma^{1:\tau}$; otherwise she is called oblivious. The adversary always has access to the learning algorithm, but an adaptive adversary additionally has access to the realized $\sigma^{1:t-1}$ and hence also knows $p^t$. Simultaneously, each expert $f \in F$ makes a prediction $\hat{y}^t_f \in \hat{Y}$, where $\hat{Y}$ is a generic prediction space. For example, in binary classification, the prediction space could simply be the positive or negative labels, $\hat{Y} = \{+,-\}$; or the probabilistic score $\hat{Y} = [0, 1]$, with $\hat{y}^t_f$ interpreted as the probability that expert $f \in F$ assigns to the positive label in round $t$; or even an uncalibrated score like the output of a support vector machine, $\hat{Y} = \mathbb{R}$. Let $\ell : \hat{Y} \times Y \to [0, 1]$ be the loss function between predictions and outcomes. This leads to a corresponding loss vector $\ell^t \in [0, 1]^d$, where $\ell^t_f = \ell(\hat{y}^t_f, y(t))$ denotes the loss the learner incurs if she follows expert $f \in F$.

4. The learner then observes the entire loss vector $\ell^t$ (full-information feedback) and incurs expected loss $\sum_{f \in F} p^t_f \ell^t_f$. For classification, this feedback is obtained by observing $y(t)$.

In this paper, we consider a setting where all the experts $f \in F$ are fair in isolation (formalized below). Regarding the group contexts, our main impossibility results (Theorems 1 and 2) assume that the group contexts $g(t)$ arrive stochastically from a fixed distribution, while our positive result (Theorem 3) holds even when they are adversarially selected. For simplicity of notation, we assume throughout the presentation that the learner's algorithm produces the distribution $p^t$ of round $t = \tau + 1$ deterministically based on $\sigma^{1:\tau}$, so all our expectations are taken only over $\sigma$; this is the case for most algorithms, and our results extend when the algorithm uses extra randomness to select the distribution.

Group fairness in online learning. We now define non-discrimination (or fairness) with respect to a particular evaluation metric $M$; e.g., in classification, the false negative rate metric (FNR) is the fraction of examples with positive outcome that are incorrectly predicted negative.
For any realization of the time-group-outcome sequence $\sigma$ and any group $g \in G$, the metric $M$ induces a subset of the population $S^\sigma_g(M)$ that is relevant to it. For example, in classification, $S^\sigma_g(\mathrm{FNR}) = \{t : g(t) = g,\ y(t) = +\}$ is the set of positive examples of group $g$. The performance of expert $f \in F$ on the subpopulation $S^\sigma_g(M)$ is denoted by $M^\sigma_f(g) = \frac{1}{|S^\sigma_g(M)|} \sum_{t \in S^\sigma_g(M)} \ell^t_f$.

Definition 1. An expert $f \in F$ is called fair in isolation with respect to metric $M$ if, for every sequence $\sigma$, her performance with respect to $M$ is the same across groups, i.e. $M^\sigma_f(g) = M^\sigma_f(g')$ for all $g, g' \in G$.

The learner's performance on this subpopulation is $M^\sigma_L(g) = \frac{1}{|S^\sigma_g(M)|} \sum_{t \in S^\sigma_g(M)} \sum_{f \in F} p^t_f \ell^t_f$. To formalize our non-discrimination desiderata, we require the algorithm to have similar expected performance across groups when given access to fair-in-isolation predictors. We make the following assumptions to avoid trivial impossibility results due to low-probability events or underrepresented populations. First, we take the expectation over sequences generated by the adversary $A$ (who has access to the learning algorithm $L$). Second, we require the relevant subpopulations to be, in expectation, large enough. Our positive results do not depend on either of these assumptions. More formally:

Definition 2. Consider a set of experts $F$ such that each expert is fair in isolation with respect to metric $M$. Learner $L$ is called $\alpha$-fair in composition with respect to metric $M$ if, for all adversaries that produce $\mathbb{E}_\sigma[\min(|S^\sigma_g(M)|, |S^\sigma_{g'}(M)|)] = \Omega(T)$ for all $g, g'$, it holds that

$$\left| \mathbb{E}_\sigma[M^\sigma_L(g)] - \mathbb{E}_\sigma[M^\sigma_L(g')] \right| \le \alpha.$$

We note that, in many settings, we wish to have non-discrimination with respect to multiple metrics simultaneously. For instance, equalized odds requires fairness in composition both with respect to the false negative rate and with respect to the false positive rate (defined analogously). Since we provide an impossibility result for equalized odds, focusing on only one metric makes the result even stronger.

Regret notions. The typical way to evaluate the performance of an algorithm in online learning is via the notion of regret. Regret compares the performance of the algorithm to the performance of the best expert in hindsight on the realized sequence $\sigma$, as defined below:

$$\mathrm{Reg}_T = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - \min_{f^\star \in F} \sum_{t=1}^{T} \ell^t_{f^\star}.$$

In the above definition, regret is a random variable depending on the sequence $\sigma$, and therefore on the randomness in the groups/outcomes. An algorithm satisfies the no-regret property (or Hannan consistency) in our setting if, for any losses realizable by the above protocol, the regret is sublinear in the time horizon $T$, i.e. $\mathrm{Reg}_T = o(T)$. This property ensures that, as time goes by, the average regret vanishes. Many online learning algorithms, such as multiplicative weights updates, satisfy this property with $\mathrm{Reg}_T = O(\sqrt{T \log(d)})$.

We focus on the notion of approximate regret, a relaxation of regret that gives a small multiplicative slack to the algorithm. More formally, $\epsilon$-approximate regret with respect to expert $f^\star \in F$ is defined as

$$\mathrm{ApxReg}_{\epsilon,T}(f^\star) = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - (1 + \epsilon) \sum_{t=1}^{T} \ell^t_{f^\star}.$$

We note that typical algorithms guarantee $\mathrm{ApxReg}_{\epsilon,T}(f^\star) = O(\ln(d)/\epsilon)$ simultaneously for all experts $f^\star \in F$. When the time horizon is known in advance, setting $\epsilon = \sqrt{\ln(d)/T}$ makes this bound imply the aforementioned regret guarantee. When the time horizon is not known, one can obtain a similar guarantee by adjusting the learning rate of the algorithm appropriately.
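For reference, a minimal sketch of the multiplicative weights (Hedge-style) update that attains the guarantees just described; this is a standard textbook construction under our own naming, not code from the paper:

```python
import numpy as np

def multiplicative_weights(loss_vectors, d, eta):
    """Hedge-style updates over d experts; losses lie in [0, 1]."""
    w = np.ones(d)
    total = 0.0
    for l in loss_vectors:       # full-information loss vector at round t
        p = w / w.sum()          # distribution p^t over the experts
        total += p @ l           # expected loss  sum_f p^t_f l^t_f
        w *= (1.0 - eta) ** l    # exponential down-weighting of lossy experts
    return total                 # cumulative expected loss of the learner
```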
Our goal is to develop online learning algorithms that combine fair-in-isolation experts so as to achieve both a vanishing average expected $\epsilon$-approximate regret, i.e. for any fixed $\epsilon > 0$ and $f^\star \in F$, $\mathbb{E}_\sigma[\mathrm{ApxReg}_{\epsilon,T}(f^\star)] = o(T)$, and non-discrimination with respect to the fairness metrics of interest.

3 Impossibility results for equalized odds

In this section, we study a popular group fairness notion, equalized odds, in the context of online learning. A natural extension of equalized odds to online settings requires that the false negative rate, i.e. the percentage of positive examples predicted incorrectly, is the same across all groups, and that the same holds for the false positive rate. We assume that our experts are fair in isolation with respect to both the false negative and the false positive rate. A weaker notion than equalized odds is equality of opportunity, where the non-discrimination condition is required to hold only for the false negative rate.

We first study whether it is possible to achieve the vanishing regret property while guaranteeing $\alpha$-fairness in composition with respect to false negative rate for arbitrarily small $\alpha$. When the input is i.i.d., this is trivial, as we can learn the best expert in $O(\log d)$ rounds and then follow its advice; since the expert is fair in isolation, this guarantees vanishing discrimination. In contrast, we show that, in a non-i.i.d. online setting, this goal is unachievable. We demonstrate this in phenomenally benign settings where there are just two groups $G = \{A, B\}$ that come from a fixed distribution and just two experts that are fair in isolation (with respect to false negative rate) even per round – not only ex post.

Our first construction (Theorem 1) shows that any no-regret learning algorithm that is group-unaware cannot guarantee fairness in composition, even on instances that are perfectly balanced (each pair of label and group gets 1/4 of the examples) – the only adversarial component is the order in which these examples arrive. This is surprising because such a task is straightforward in the stochastic setting, as all hypotheses are non-discriminatory. We then study whether actively using the group identity can correct the above, similarly to how it enables corrections against discriminatory predictors [20]. The answer is negative even in this scenario (Theorem 2): if the population is sufficiently unbalanced, any no-regret learning algorithm will be unfair in composition with respect to false negative rate even if it is not group-unaware.

Group-unaware algorithms. We first present the impossibility result for group-unaware algorithms. In our construction, the adversary is oblivious, there is perfect balance across groups (half of the population corresponds to each group), and perfect balance within each group (half of the labels of each group are positive and half negative).

Theorem 1. For all $\alpha < 3/8$, there exists $\epsilon > 0$ such that any group-unaware algorithm that satisfies $\mathbb{E}_\sigma[\mathrm{ApxReg}_{\epsilon,T}(f)] = o(T)$ for all $f \in F$ is $\alpha$-unfair in composition with respect to false negative rate, even for perfectly balanced sequences.

Proof sketch. Consider an instance that consists of two groups $G = \{A, B\}$, two experts $F = \{h_n, h_u\}$, and two phases: Phase I and Phase II. Group $A$ is the group we end up discriminating against, while group $B$ is boosted by the discrimination with respect to false negative rate. At each round $t$ the groups arrive stochastically with probability 1/2 each, independent of $\sigma^{1:t-1}$.
The experts output a score value in $\hat{Y} = [0, 1]$, where the score $\hat{y}^t_f \in \hat{Y}$ can be interpreted as the probability that expert $f$ assigns to the label being positive in round $t$, i.e. $y(t) = +$. The loss function is the expected probability of error, given by $\ell(\hat{y}, y) = \hat{y} \cdot 1\{y = -\} + (1 - \hat{y}) \cdot 1\{y = +\}$. The two experts are very simple: $h_n$ always predicts negative, i.e. $\hat{y}^t_{h_n} = 0$ for all $t$, and $h_u$ is an unbiased expert who, irrespective of the group or the label, makes an inaccurate prediction with probability $\beta = 1/4 + \sqrt{\epsilon}$, i.e. $\hat{y}^t_{h_u} = \beta \cdot 1\{y(t) = -\} + (1 - \beta) \cdot 1\{y(t) = +\}$ for all $t$. Both experts are fair in isolation with respect to both false negative and false positive rates: the FNR is 100% for $h_n$ and $\beta$ for $h_u$ regardless of the group, and the FPR is 0% for $h_n$ and $\beta$ for $h_u$, again independent of the group. The instance proceeds in two phases:

1. Phase I lasts for $T/2$ rounds. The adversary assigns negative labels to examples with group context $B$ and assigns a label uniformly at random to examples from group $A$.

2. In Phase II, there are two plausible worlds:
(a) if the expected probability the algorithm assigns to expert $h_u$ in Phase I satisfies $\mathbb{E}_\sigma[\sum_{t=1}^{T/2} p^t_{h_u}] > \sqrt{\epsilon} \cdot T$, then the adversary assigns negative labels to both groups;
(b) otherwise, the adversary assigns positive labels to examples with group context $B$, while examples from group $A$ keep receiving positive and negative labels with probability one half each.

We will show that, for any algorithm with the vanishing approximate regret property, i.e. with $\mathrm{ApxReg}_{\epsilon,T}(f) = o(T)$, the condition for the first world is never triggered, and hence the above sequence is indeed balanced. We now show why this instance is unfair in composition with respect to false negative rate. The proof involves showing the following two claims, whose proofs we defer to the supplementary material.

1. In Phase I, any $\epsilon$-approximate regret algorithm needs to select the negative expert $h_n$ most of the time to ensure small approximate regret with respect to $h_n$. This means that, in Phase I (where we encounter half of the positive examples from group $A$ and none from group $B$), the false negative rate of the algorithm is close to 1.

2. In Phase II, any $\epsilon$-approximate regret algorithm should quickly catch up to ensure small approximate regret with respect to $h_u$, and hence the false negative rate of the algorithm is closer to $\beta$. Since the algorithm is group-unaware, this creates a mismatch between the false negative rates of $B$ (which only receives false negatives in this phase) and $A$ (which has also received many false negatives before).

Group-aware algorithms. We now turn our attention to group-aware algorithms, which can use the group context of the example to select the probability of each expert, and we provide a similar impossibility result. There are three changes compared to the impossibility result for group-unaware algorithms. First, the adversary is not oblivious but adaptive. Second, we do not have perfect balance across populations; instead, the minority population arrives with probability $b < 0.49$, while the majority population arrives with probability $1 - b$. Third, the labels are not equally distributed between positive and negative for each population; instead, the positive labels of one group make up at least a fraction $c$ of the group's total examples, for a small $c > 0$. Although the upper bounds on $b$ and $c$ are not optimized, our impossibility result cannot extend to $b = c = 1/2$.
Understanding whether one can achieve fairness in composition for some values of $b$ and $c$ is an interesting open question. Our impossibility guarantee is the following:

Theorem 2. For any group imbalance $b < 0.49$ and $0 < \alpha < \frac{0.49 - 0.99b}{1-b}$, there exists $\epsilon_0 > 0$ such that for all $0 < \epsilon < \epsilon_0$, any algorithm that satisfies $\mathbb{E}_\sigma[\mathrm{ApxReg}_{\epsilon,T}(f)] = o(T)$ for all $f \in F$ is $\alpha$-unfair in composition.

Proof sketch. The instance has two groups: $G = \{A, B\}$. Examples with group context $A$ are discriminated against and arrive randomly with probability $b < 1/2$, while examples with group context $B$ are boosted by the discrimination and arrive with the remaining probability $1 - b$. There are again two experts $F = \{h_n, h_p\}$, which output score values in $\hat{Y} = [0, 1]$, where $\hat{y}^t_f$ can be interpreted as the probability that expert $f$ assigns to the label being $+$ in round $t$. We use the earlier loss function $\ell(\hat{y}, y) = \hat{y} \cdot 1\{y = -\} + (1 - \hat{y}) \cdot 1\{y = +\}$. The first expert $h_n$ is again pessimistic and always predicts negative, i.e. $\hat{y}^t_{h_n} = 0$, while the other expert $h_p$ is optimistic and always predicts positive, i.e. $\hat{y}^t_{h_p} = 1$. These satisfy fairness in isolation with respect to equalized odds (false negative rate and false positive rate). Let $c = 1/101^2$ denote the fraction of the input consisting of positive examples from $A$, which ensures that $|S^\sigma_g(\mathrm{FNR})| = \Omega(T)$. The instance proceeds in two phases.

1. Phase I lasts $\Theta \cdot T$ rounds, for $\Theta = 101c$. The adversary assigns negative labels to examples with group context $B$. For examples with group context $A$, the adversary acts as follows:
- if the algorithm assigns probability to the negative expert below $\gamma(\epsilon) = \frac{99 - 2\epsilon}{100}$, i.e. $p^t_{h_n}(\sigma^{1:t-1}) < \gamma(\epsilon)$, then the adversary assigns a negative label;
- otherwise, the adversary assigns a positive label.

2. In Phase II, there are two plausible worlds:
(a) the adversary assigns negative labels to both groups if the expected number of times that the algorithm selected the negative expert with probability at least $\gamma(\epsilon)$ on members of group $A$ is less than $c \cdot b \cdot T$, i.e. $\mathbb{E}_\sigma\big[\,|\{t \le \Theta \cdot T : g(t) = A,\ p^t_{h_n} \ge \gamma(\epsilon)\}|\,\big] < c \cdot b \cdot T$;
(b) otherwise she assigns positive labels to examples with group context $B$ and negative labels to examples with group context $A$.

Note that, as before, the condition for the first world will never be triggered by any no-regret learning algorithm (we elaborate on this below), which ensures that $\mathbb{E}_\sigma|S^\sigma_A(\mathrm{FNR})| \ge c \cdot b \cdot T$. The proof is based on the following claims, whose proofs are deferred to the supplementary material.

1. In Phase I, any vanishing approximate regret algorithm enters the second world of Phase II.
2. This implies a lower bound on the false negative rate on $A$, i.e. $\mathrm{FNR}(A) \ge \gamma(\epsilon) = \frac{99 - 2\epsilon}{100}$.
3. In Phase II, any $\epsilon$-approximate regret algorithm assigns a large enough probability to expert $h_p$ for group $B$, implying an upper bound on the false negative rate on $B$, i.e. $\mathrm{FNR}(B) \le \frac{1}{2(1-b)}$.

Therefore this provides a gap of at least $\alpha$ in the false negative rates.

4 Fairness in composition with respect to an alternative metric

The negative results of the previous section raise the natural question of whether fairness in composition can be achieved for some other fairness metric in an online setting. We answer this question positively by suggesting the equalized error rates metric EER, which captures the average loss over the total number of examples (independent of whether this loss comes from false negative or false positive examples). The relevant subset induced by this metric, $S^\sigma_g(\mathrm{EER})$, is the set of all examples coming from group $g \in G$.
We again assume that experts are fair in isolation with respect to equalized error rates, and we show that a simple scheme, running a separate instance of multiplicative weights for each group, achieves fairness in composition (Theorem 3). The result holds for general loss functions (beyond pure classification) and is robust to the experts being only approximately fair in isolation. A crucial property we use is that multiplicative weights not only performs no worse than the best expert, it also performs no better; in fact, this property holds more generally for online learning algorithms with optimal regret guarantees [14].

Interestingly, not all algorithms can achieve fairness in composition even with respect to this refined notion. We provide two algorithm classes for which this is unachievable. First, we show that any algorithm (subject to a technical condition satisfied by algorithms such as multiplicative weights and follow the perturbed leader) that ignores the group identity can be unboundedly unfair with respect to equalized error rates (Theorem 4). This suggests that the algorithm needs to actively discriminate based on the groups to achieve fairness with respect to equalized error rates. Second, we show a similar negative statement for any algorithm that satisfies the more involved guarantee of small shifting regret, thereby outperforming the best expert (Theorem 5). This suggests that the algorithm used should be good, but not too good. This result is, to the best of our knowledge, a first application where shifting regret may not be desirable, which may be of independent interest.

The positive result. We run separate instances of multiplicative weights with a fixed learning rate $\eta$, one for each group. More formally, for each pair of expert $f \in F$ and group $g \in G$, we initialize weights $w^1_{f,g} = 1$. At round $t \in \{1, 2, \ldots, T\}$, an example with group context $g(t)$ arrives and the learner selects a probability distribution based on the corresponding weights:

$$p^t_f = \frac{w^t_{f,g(t)}}{\sum_{j \in F} w^t_{j,g(t)}}.$$

Then the weights corresponding to group $g(t)$ are updated exponentially: $w^{t+1}_{f,g} = w^t_{f,g} \cdot (1-\eta)^{\ell^t_f \cdot 1\{g(t) = g\}}$.

Theorem 3. For any $\alpha > 0$ and any $\epsilon < \alpha$, running separate instances of multiplicative weights for each group with learning rate $\eta = \min(\epsilon, \alpha/6)$ guarantees $\alpha$-fairness in composition and $\epsilon$-approximate regret of at most $O(|G| \log(d)/\epsilon)$.

Proof sketch. The proof is based on the property that multiplicative weights performs not only no worse than the best expert in hindsight but also no better. Therefore the average performance of multiplicative weights in each group is approximately equal to the average performance of the best expert in that group. Since the experts are fair in isolation, the average performance of the best expert is the same in all groups, which guarantees the equalized error rates desideratum. We make these arguments formal in the supplementary material.

Remark 1. If the instance is instead only approximately fair in isolation with respect to equalized error rates, i.e. the error rates of the two experts are not exactly equal but within some constant $\kappa$, the same analysis implies $(\alpha + \kappa)$-fairness in composition with respect to equalized error rates.
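A minimal sketch of this per-group scheme, using the weight update given above; the class and method names are our own illustration, not the authors' code:

```python
import numpy as np

class GroupwiseMW:
    """One multiplicative weights instance per group (the Theorem 3 scheme)."""

    def __init__(self, d, groups, eta):
        self.w = {g: np.ones(d) for g in groups}  # w^1_{f,g} = 1
        self.eta = eta

    def distribution(self, g):
        # p^t over experts for an example with group context g
        w = self.w[g]
        return w / w.sum()

    def update(self, g, losses):
        # only the weights of the arriving group are touched
        self.w[g] *= (1.0 - self.eta) ** losses
```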
Impossibility results for group-unaware algorithms. In the previous algorithm, it was crucial that the examples of one group do not interfere with the decisions of the algorithm on the other group. We show that, had we run one multiplicative weights algorithm in a group-unaware way, we would not have achieved fairness in composition. In fact, this impossibility result holds for any algorithm with vanishing $\epsilon$-approximate regret where the learning dynamic ($p^t$ at each round $t$) is a deterministic function of the difference between the cumulative losses of the experts (without taking their identity into consideration). This is satisfied, for instance, by multiplicative weights and follow the perturbed leader with a constant learning rate. Unlike the previous section, the impossibility results for equalized error rates require groups to arrive adversarially (as is also the case in the above positive result). The proof of the following theorem is provided in the supplementary material.

Theorem 4. For any $\alpha > 0$ and any $\epsilon > 0$, running a single algorithm from the above class in a group-unaware way is $\alpha$-unfair in composition with respect to equalized error rate.

Impossibility results for shifting algorithms. The reader may wonder whether it suffices to simply run separate learning algorithms on the two groups, or whether multiplicative weights has a special property. In the following theorem, we show that the latter is the case. In particular, multiplicative weights has the property of not doing better than the best expert in hindsight. The main representatives of algorithms that do not have such a property are the algorithms that achieve low approximate regret against a shifting benchmark (tracking the best expert). More formally, approximate regret against a shifting comparator $f^\star = (f^\star(1), \ldots, f^\star(T))$ is defined as

$$\mathrm{ApxReg}_{\epsilon,T}(f^\star) = \sum_{t} \sum_{f \in F} p^t_f \ell^t_f - (1 + \epsilon) \sum_{t} \ell^t_{f^\star(t)},$$

and typical guarantees are $\mathbb{E}[\mathrm{ApxReg}_{\epsilon,T}(f^\star)] = O(K(f^\star) \cdot \ln(dT)/\epsilon)$, where $K(f^\star) = \sum_{t=2}^{T} 1\{f^\star(t) \neq f^\star(t-1)\}$ is the number of switches in the comparator. We show that any algorithm achieving such a guarantee, even when $K(f^\star) = 2$, does not satisfy fairness in composition with respect to equalized error rate. This indicates that, for the purpose of fairness with respect to equalized error rates, it is essential that the algorithm is not too good. This is established in the following theorem, whose proof is deferred to the supplementary material.

Theorem 5. For any $\alpha < 1/2$ and $\epsilon > 0$, and for any algorithm that achieves the vanishing approximate regret property against shifting comparators $f$ with $K(f) = 2$, running separate instances of the algorithm for each group is $\alpha$-unfair in composition with respect to equalized error rate.

5 Discussion

In this paper, we introduce the study of avoiding discrimination against protected groups in online settings with non-i.i.d. examples. Our impossibility results for equalized odds involve only two phases, which highlights the challenge of correcting for historical biases in online decision making. Our work also opens up a quest for definitions that are relevant and tractable in non-i.i.d. online settings for specific tasks. We introduce the notion of equalized error rates, which can be a useful non-discrimination metric in settings where all examples that contribute towards performance also contribute towards fairness. This is the case when all mistakes are similarly costly, as in speech recognition, recommender systems, or online advertising. However, we do not claim that its applicability is universal.
5 Discussion

In this paper, we introduce the study of avoiding discrimination against protected groups in online settings with non-i.i.d. examples. Our impossibility results for equalized odds consist of only two phases, which highlights the challenge in correcting for historical biases in online decision making. Our work also opens up a quest for definitions that are relevant and tractable in non-i.i.d. online settings for specific tasks. We introduce the notion of equalized error rates, which can be a useful metric for non-discrimination in settings where all examples that contribute towards performance also contribute towards fairness. This is the case in settings where all mistakes are similarly costly, as in speech recognition, recommender systems, or online advertising. However, we do not claim that its applicability is universal. For instance, consider college admission with two perfectly balanced groups that correspond to ethnicity (equal size of the two groups and an equal number of positives and negatives within each group). A racist program organizer can choose to admit all students of one group and reject all students of the other while satisfying equalized error rates – this does not satisfy equalized odds. Given the impossibility result we established for equalized odds, it is interesting to identify definitions that work well for the different tasks one encounters in online non-i.i.d. settings. Moreover, although our positive results extend to the case where the predictors are vetted to be approximately non-discriminatory, they say nothing about the case where the predictors do not satisfy this property. We therefore view our work only as a first step towards understanding non-discrimination in non-i.i.d. online settings.

Acknowledgements

The authors would like to thank Manish Raghavan for useful discussions that improved the presentation of the paper. This work was supported by NSF grants CCF-1800317 and CCF-1563714, as well as a Google Ph.D. Fellowship.
1. What is the focus of the paper regarding group fairness in online learning? 2. What are the strengths of the paper, particularly in its writing quality and contributions? 3. What are the weaknesses of the paper regarding its assumptions and practical usefulness? 4. How could the authors improve the paper by modifying their assumptions and providing more intuition?
Review
Review This paper studies the design of an algorithm for group fairness in online learning. This setup is more realistic than online learning for individual fairness and batch group fairness. The paper is very well written and the contributions are clear. However, I think the assumption that all the predictors are individually non-discriminatory is rather strong. Although it is fine to use this strong assumption for the impossibility results, I did not find the other two results (the positive ones about equal error rates) practically useful. I think the authors could replace this assumption with a weaker adversary, for example by assuming that labels come from a distribution instead of being adversarially selected. I think adding more intuition about these two assumptions would help a lot. 1) Assuming every predictor is individually non-discriminatory. 2) Assuming the adversary can adaptively choose the labels.
NIPS
Title
On preserving non-discrimination when combining expert advice

Abstract
We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. We consider the most basic extension of classical online learning: Given a class of predictors that are individually non-discriminatory with respect to a particular metric, how can we combine them to perform as well as the best predictor, while preserving non-discrimination? Surprisingly, we show that this task is unachievable for the prevalent notion of equalized odds, which requires equal false negative rates and equal false positive rates across groups. On the positive side, for another notion of non-discrimination, equalized error rates, we show that running separate instances of the classical multiplicative weights algorithm for each group achieves this guarantee. Interestingly, even for this notion, we show that algorithms with stronger performance guarantees than multiplicative weights cannot preserve non-discrimination.

1 Introduction
The emergence of machine learning in the last decade has given rise to an important debate regarding the ethical and societal responsibility of its offspring. Machine learning has provided a universal toolbox enhancing decision making in many disciplines, from advertising and recommender systems to education and criminal justice. Unfortunately, both the data and their processing can be biased against specific population groups (even inadvertently) in every single step of the process [4]. This has generated societal and policy interest in understanding the sources of this discrimination, and interdisciplinary research has attempted to mitigate its shortcomings. Discrimination is commonly an issue in applications where decisions need to be made sequentially. The most prominent such application is online advertising, where platforms need to sequentially select which ad to display in response to particular query searches. This process can introduce discrimination against protected groups in many ways, such as filtering particular alternatives [12, 2] and reinforcing existing stereotypes through search results [38, 25]. Another canonical example of sequential decision making is medical trials, where underexploration in female groups often leads to significantly worse treatments for them [31]. Similar issues occur in image classification, as stressed by “gender shades” [7]. The reverse (overexploration in minority populations) can also cause concerns, especially if conducted in a non-transparent fashion [5]. In these sequential settings, the assumption that data are i.i.d. is often violated. Online advertising, recommender systems, medical trials, image classification, loan decisions, and criminal recidivism all require decisions to be made sequentially. The corresponding labels are not identically distributed across time and can be affected by the economy, recent events, etc. Labels are also not independent across rounds – if a bank offers a loan then this decision can affect whether the loanee or their environment will be able to repay future loans, thereby affecting future labels, as discussed by Liu et al. [32]. As a result, it is important to understand the effect of this adaptivity on non-discrimination. The classical way to model settings that are not i.i.d.
is via adversarial online learning [30, 17], which poses the question: Given a class F of predictors, how can we make online predictions that perform as well as the best predictor from F in hindsight? The most basic online learning question (answered via the celebrated “multiplicative weights” algorithm) concerns competing with a finite set of predictors. The class F is typically referred to as “experts” and can be thought of as “features” of the example, where we want to make online predictions that compete with the best 1-sparse predictor. In this work, we wish to understand the interplay between adaptivity and non-discrimination and therefore consider the most basic extension of the classical online learning question: Given a class of individually non-discriminatory predictors, how can we combine them to perform as well as the best predictor, while preserving non-discrimination? The assumption that predictors are individually non-discriminatory is a strong assumption on the predictors and makes the task trivial in the batch setting, where the algorithm is given labeled examples and wishes to perform well on unseen examples drawn from the same distribution. This happens because the algorithm can learn the best predictor from the labeled examples and then follow it (since this predictor is individually non-discriminatory, the algorithm does not exhibit discrimination). This enables us to understand the potential overhead that adaptivity is causing and significantly strengthens any impossibility result. Moreover, we can assume that the predictors have been individually vetted to satisfy the non-discrimination desiderata – we therefore wish to understand how to efficiently compose these non-discriminatory predictors while preserving non-discrimination.

1.1 Our contribution

Our impossibility results for equalized odds. Surprisingly, we show that for a prevalent notion of non-discrimination, equalized odds, it is impossible to preserve non-discrimination while also competing comparably with the best predictor in hindsight (the no-regret property). Equalized odds, suggested by Hardt et al. [20] in the batch setting, restricts the set of allowed predictors by requiring that, when examples come from different groups, the prediction is independent of the group conditioned on the label. In binary classification, this means that the false negative rate (the fraction of positive examples predicted negative) is equal across groups, and the same holds for the false positive rate (defined analogously). This notion was popularized by a recent debate on the potential bias of machine learning risk tools for criminal recidivism [1, 10, 28, 16]. Our impossibility results demonstrate that the order in which examples arrive significantly complicates the task of achieving the desired efficiency while preserving non-discrimination with respect to equalized odds. In particular, we show that any algorithm agnostic to the group identity either cannot achieve performance comparable to the best predictor or exhibits discrimination in some instances (Theorem 1). This occurs in remarkably simple settings with only two individually non-discriminatory predictors, two groups, and perfectly balanced instances: the groups are of equal size and each receives an equal number of positive and negative labels. The only imbalance lies in the order in which these labels arrive, which is also relatively well behaved – labels are generated from two i.i.d. distributions, one in the first half of the instance and one in the second half.
Although in many settings we cannot actively use the group identity of the examples due to legal reasons (e.g., in hiring), one may wonder whether these impossibility results disappear if we can actively use the group information to compensate for past mistakes. We show that this is also not the case (Theorem 2). Although here our groups are not perfectly balanced, the construction is again very simple and consists of only two groups and two predictors: one always predicting positive and one always predicting negative. The simplicity of the settings, combined with the very strong assumption that the predictors are individually non-discriminatory, speaks to the trade-off between adaptivity and non-discrimination with respect to equalized odds.

Our results for equalized error rates. The strong impossibility results with respect to equalized odds invite the natural question of whether there exists some alternative fairness notion that, given access to non-discriminatory predictors, achieves efficiency while preserving non-discrimination. We answer this positively by suggesting the notion of equalized error rates, which requires that the average expected loss (regardless of whether it stems from false positives or false negatives) encountered by each group should be the same. This notion makes sense in settings where performance and fairness are measured with respect to the same objective. Consider a medical application where people from different subpopulations wish to receive appropriate treatment and any error in treatment costs equally both towards performance and towards fairness.1 It is morally objectionable to discriminate against one group, e.g. based on race, using it as experimentation to enhance the quality of service of the other, and it is reasonable to require that all subpopulations receive the same quality of service. For this notion, we show that, if all predictors are individually non-discriminatory with respect to equalized error rates, running separate multiplicative weights algorithms, one for each subpopulation, preserves this non-discrimination without any decay in efficiency (Theorem 3). The key property we use is that the multiplicative weights algorithm is guaranteed to perform not only no worse than the best predictor in hindsight but also no better; this property holds for a broader class of algorithms [14]. Our result applies to general loss functions beyond binary predictions and only requires the predictors to satisfy the weakened assumption of being approximately non-discriminatory. Finally, we examine whether the decisions to run separate algorithms and to run this particular, deliberately not-too-strong algorithm were important for the result. We first give evidence that running separate algorithms is essential: if we run a single instance of “multiplicative weights” or “follow the perturbed leader”, we cannot guarantee non-discrimination with respect to equalized error rates (Theorem 4). We then show that the property of not performing better than the best predictor is also crucial; in particular, stronger algorithms that satisfy the guarantee of low shifting regret [21, 6, 34] are also unable to guarantee this non-discrimination (Theorem 5). These algorithms are considered superior to classical no-regret algorithms as they can better adapt to changes in the environment, which has nice implications in game-theoretic settings [35].
Our latter impossibility result is a first application where having these strong guarantees against changing benchmarks is not necessarily desired, and is therefore of independent learning-theoretic interest.

1.2 Related work

There is a large line of work on fairness and non-discrimination in machine learning (see [36, 8, 13, 41, 22, 20, 10, 28, 26] for a non-exhaustive list). We elaborate on works that either study group notions of fairness or fairness in online learning. The last decade has seen a lot of work on group notions of fairness, mostly in the classification setting. Examples include notions that compare the percentage of members predicted positive, such as demographic parity [8] and disparate impact [15], as well as equalized odds [20] and calibration across groups [10, 28]. There is no consensus on a universal fairness notion; rather, the specific notion considered is largely task-specific. In fact, previous works identified that these notions are often not compatible with each other [10, 28], raised concerns that they may introduce unintentional discrimination [11], and suggested the need to go beyond such observational criteria via causal reasoning [27, 29]. Prior to our work, group fairness notions have been studied primarily in the batch learning setting with the goal of optimizing a loss function subject to a fairness constraint, either in a post-hoc correction framework as proposed by Hardt et al. [20] or more directly during training from batch data [41, 19, 39, 40, 3], which requires care since the predictors are discriminatory with respect to the particular metric of interest. The setting we focus on in this paper does not have the above challenges, since all predictors are non-discriminatory; nevertheless, we obtain surprising impossibility results due to the ordering in which labels arrive. Recently, fairness in online learning has also started receiving attention. One line of work focuses on imposing a particular fairness guarantee at all times for bandits and contextual bandits, either for individual fairness [22, 23] or for group fairness [9]. Another line of work points to counterintuitive externalities of using contextual bandit algorithms agnostic to the group identity and suggests that heterogeneity in data can replace the need for exploration [37, 24]. Moreover, following a seminal paper by Dwork et al. [13], a line of work aims to treat similar people similarly in online settings [33, 18]. Our work distinguishes itself from these directions mainly in the objective, since we require the non-discrimination to hold in the long term instead of at every round; this extends the classical batch definitions of non-discrimination to the online setting. In particular, we focus only on situations where there are enough samples from each population of interest, and we do not penalize the algorithm for a few wrong decisions, which would lead it to be overly pessimistic. Another difference is that previous work focuses either on individual notions of fairness or on i.i.d. inputs, while our work is about non-i.i.d. inputs and group notions of fairness.

1 In contrast, in equalized odds, a misprediction only counts towards the false negative metric if the label is positive.

2 Model

Online learning protocol with group context. We consider the classical online learning setting of prediction with expert advice, where a learner needs to make sequential decisions for T rounds by combining the predictions of a finite set F of d hypotheses (also referred to as experts).
We denote the outcome space by $Y$; in binary classification, this corresponds to $Y = \{+,-\}$. Additionally, we introduce a set of disjoint groups $G$, which identifies subsets of the population based on a protected attribute (such as gender, ethnicity, or income). The online learning protocol with group context proceeds in T rounds. Each round t is associated with a group context $g(t) \in G$ and an outcome $y(t) \in Y$. We denote the resulting T-length time-group-outcome sequence tuple by $\sigma = \{(t, g(t), y(t)) \in \mathbb{N} \times G \times Y\}_{t=1}^{T}$. This is a random variable that can depend on the randomness in the generation of the groups and the outcomes. We use the shorthand $\sigma^{1:\tau} = \{(t, g(t), y(t)) \in \mathbb{N} \times G \times Y\}_{t=1}^{\tau}$ to denote the subsequence until round $\tau$. The exact protocol for generating these sequences is described below. At round $t = 1, 2, \ldots, T$: 1. An example with group context $g(t) \in G$ arrives stochastically or is adversarially selected. 2. The learning algorithm or learner L commits to a probability distribution $p^t \in \Delta(d)$ across experts, where $p^t_f$ denotes the probability that she follows the advice of expert $f \in F$ at round t. This distribution $p^t$ can be a function of the sequence $\sigma^{1:t-1}$. We call the learner group-unaware if she ignores the group context $g(\tau)$ for all $\tau \leq t$ when selecting $p^t$. 3. An adversary A then selects an outcome $y(t) \in Y$. The adversary is called adaptive if the groups/outcomes at round $t = \tau + 1$ are a function of the realization of $\sigma^{1:\tau}$; otherwise she is called oblivious. The adversary always has access to the learning algorithm, but an adaptive adversary additionally has access to the realized $\sigma^{1:t-1}$ and hence also knows $p^t$. Simultaneously, each expert $f \in F$ makes a prediction $\hat{y}^t_f \in \hat{Y}$, where $\hat{Y}$ is a generic prediction space; for example, in binary classification, the prediction space could simply be the positive or negative labels, $\hat{Y} = \{+,-\}$, or a probabilistic score, $\hat{Y} = [0, 1]$, with $\hat{y}^t_f$ interpreted as the probability that expert $f \in F$ assigns to the positive label in round t, or even an uncalibrated score like the output of a support vector machine, $\hat{Y} = \mathbb{R}$. Let $\ell : \hat{Y} \times Y \to [0, 1]$ be the loss function between predictions and outcomes. This leads to a corresponding loss vector $\ell^t \in [0, 1]^d$, where $\ell^t_f = \ell(\hat{y}^t_f, y(t))$ denotes the loss the learner incurs if she follows expert $f \in F$. 4. The learner then observes the entire loss vector $\ell^t$ (full-information feedback) and incurs expected loss $\sum_{f \in F} p^t_f \ell^t_f$. For classification, this feedback is obtained by observing $y(t)$. In this paper, we consider a setting where all the experts $f \in F$ are fair in isolation (formalized below). Regarding the group contexts, our main impossibility results (Theorems 1 and 2) assume that the group contexts $g(t)$ arrive stochastically from a fixed distribution, while our positive result (Theorem 3) holds even when they are adversarially selected. For simplicity of notation, we assume throughout the presentation that the learner's algorithm produces the distribution $p^t$ of round $t = \tau + 1$ deterministically based on $\sigma^{1:\tau}$, and therefore all our expectations are taken only over $\sigma$; this is the case for most algorithms. Our results extend when the algorithm uses extra randomness to select the distribution.
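The round structure above can be summarized in a short simulation skeleton. Everything here (the learner/expert/adversary interfaces and their method names) is our own scaffolding for illustration, not an API from the paper; an adaptive adversary is modeled by letting the outcome depend on the realized history and on p^t.

```python
def run_protocol(learner, experts, adversary, loss, T):
    """One pass of the online learning protocol with group context (schematic)."""
    history = []  # the realized sequence sigma: (t, group, outcome)
    for t in range(1, T + 1):
        g = adversary.choose_group(history)          # step 1: example arrives
        p = learner.distribution(g)                  # step 2: learner commits to p^t
        y = adversary.choose_outcome(history, g, p)  # step 3: adaptive adversary may use p^t
        preds = [e.predict(t, g) for e in experts]   #         experts predict simultaneously
        ell = [loss(yhat, y) for yhat in preds]      # loss vector ell^t in [0, 1]^d
        learner.update(g, ell)                       # step 4: full-information feedback
        history.append((t, g, y))
    return history
```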
Group fairness in online learning. We now define non-discrimination (or fairness) with respect to a particular evaluation metric $M$; e.g., in classification, the false negative rate metric (FNR) is the fraction of examples with positive outcome that are incorrectly predicted negative. For any realization of the time-group-outcome sequence $\sigma$ and any group $g \in G$, metric $M$ induces a subset $S^\sigma_g(M)$ of the population that is relevant to it. For example, in classification, $S^\sigma_g(\mathrm{FNR}) = \{t : g(t) = g, y(t) = +\}$ is the set of positive examples of group g. The performance of expert $f \in F$ on the subpopulation $S^\sigma_g(M)$ is denoted by $M^\sigma_f(g) = \frac{1}{|S^\sigma_g(M)|} \sum_{t \in S^\sigma_g(M)} \ell^t_f$.

Definition 1. An expert $f \in F$ is called fair in isolation with respect to metric $M$ if, for every sequence $\sigma$, her performance with respect to $M$ is the same across groups, i.e. $M^\sigma_f(g) = M^\sigma_f(g')$ for all $g, g' \in G$.

The learner's performance on this subpopulation is $M^\sigma_L(g) = \frac{1}{|S^\sigma_g(M)|} \sum_{t \in S^\sigma_g(M)} \sum_{f \in F} p^t_f \ell^t_f$. To formalize our non-discrimination desiderata, we require the algorithm to have similar expected performance across groups when given access to fair-in-isolation predictors. We make the following assumptions to avoid trivial impossibility results due to low-probability events or underrepresented populations. First, we take the expectation over sequences generated by the adversary A (which has access to the learning algorithm L). Second, we require the relevant subpopulations to be, in expectation, large enough. Our positive results do not depend on either of these assumptions. More formally:

Definition 2. Consider a set of experts F such that each expert is fair in isolation with respect to metric $M$. Learner L is called $\alpha$-fair in composition with respect to metric $M$ if, for all adversaries that produce $\mathbb{E}_\sigma[\min(|S^\sigma_g(M)|, |S^\sigma_{g'}(M)|)] = \Omega(T)$ for all $g, g'$, it holds that $|\mathbb{E}_\sigma[M^\sigma_L(g)] - \mathbb{E}_\sigma[M^\sigma_L(g')]| \leq \alpha$.

We note that, in many settings, we wish to have non-discrimination with respect to multiple metrics simultaneously. For instance, equalized odds requires fairness in composition both with respect to the false negative rate and with respect to the false positive rate (defined analogously). Since we provide an impossibility result for equalized odds, focusing on only one metric makes the result even stronger.

Regret notions. The typical way to evaluate the performance of an algorithm in online learning is via the notion of regret, which compares the performance of the algorithm to the performance of the best expert in hindsight on the realized sequence $\sigma$: $\mathrm{Reg}_T = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - \min_{f^\star \in F} \sum_{t=1}^{T} \ell^t_{f^\star}$. In this definition, regret is a random variable depending on the sequence $\sigma$, and therefore on the randomness in groups and outcomes. An algorithm satisfies the no-regret property (or Hannan consistency) in our setting if, for any losses realizable by the above protocol, the regret is sublinear in the time horizon T, i.e. $\mathrm{Reg}_T = o(T)$. This property ensures that, as time goes by, the average regret vanishes. Many online learning algorithms, such as multiplicative weights updates, satisfy this property with $\mathrm{Reg}_T = O(\sqrt{T \log(d)})$. We focus on the notion of approximate regret, a relaxation of regret that gives a small multiplicative slack to the algorithm. More formally, $\epsilon$-approximate regret with respect to expert $f^\star \in F$ is defined as $\mathrm{ApxReg}_{\epsilon,T}(f^\star) = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - (1 + \epsilon) \sum_{t=1}^{T} \ell^t_{f^\star}$. We note that typical algorithms guarantee $\mathrm{ApxReg}_{\epsilon,T}(f^\star) = O(\ln(d)/\epsilon)$ simultaneously for all experts $f^\star \in F$. When the time horizon is known in advance, setting $\epsilon = \sqrt{\ln(d)/T}$ makes such a bound imply the aforementioned regret guarantee. When the time horizon is not known, one can obtain a similar guarantee by adjusting the learning rate of the algorithm appropriately.
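Both regret notions are straightforward to compute from recorded play; a minimal sketch, with our own data layout (per-round probability and loss vectors), follows.

```python
def regret(p, losses):
    """Regret versus the best fixed expert in hindsight.

    p[t][f]      -- probability placed on expert f at round t
    losses[t][f] -- loss of expert f at round t, in [0, 1]
    """
    d = len(losses[0])
    learner = sum(sum(pt[f] * lt[f] for f in range(d))
                  for pt, lt in zip(p, losses))
    best_fixed = min(sum(lt[f] for lt in losses) for f in range(d))
    return learner - best_fixed

def apx_regret(p, losses, f_star, eps):
    # eps-approximate regret against a fixed expert f_star
    learner = sum(sum(pt[f] * lt[f] for f in range(len(lt)))
                  for pt, lt in zip(p, losses))
    return learner - (1.0 + eps) * sum(lt[f_star] for lt in losses)
```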
Our goal is to develop online learning algorithms that combine fair-in-isolation experts in order to achieve both the vanishing average expected $\epsilon$-approximate regret property, i.e. for any fixed $\epsilon > 0$ and $f^\star \in F$, $\mathbb{E}_\sigma[\mathrm{ApxReg}_{\epsilon,T}(f^\star)] = o(T)$, and non-discrimination with respect to the fairness metrics of interest.

3 Impossibility results for equalized odds

In this section, we study a popular group fairness notion, equalized odds, in the context of online learning. A natural extension of equalized odds to online settings requires that the false negative rate, i.e. the percentage of positive examples predicted incorrectly, is the same across all groups, and that the same holds for the false positive rate. We assume that our experts are fair in isolation with respect to both the false negative and the false positive rate. A weaker notion than equalized odds is equality of opportunity, where the non-discrimination condition is required to hold only for the false negative rate. We first study whether it is possible to achieve the vanishing regret property while guaranteeing $\alpha$-fairness in composition with respect to false negative rate for arbitrarily small $\alpha$. When the input is i.i.d., this is trivial, as we can learn the best expert in $O(\log d)$ rounds and then follow its advice; since the expert is fair in isolation, this guarantees vanishing non-discrimination. In contrast, we show that, in a non-i.i.d. online setting, this goal is unachievable. We demonstrate this in remarkably benign settings where there are just two groups $G = \{A, B\}$ that come from a fixed distribution and just two experts that are fair in isolation (with respect to false negative rate) even per round – not only ex post. Our first construction (Theorem 1) shows that any no-regret learning algorithm that is group-unaware cannot guarantee fairness in composition, even in instances that are perfectly balanced (each pair of label and group gets 1/4 of the examples) – the only adversarial component is the order in which these examples arrive. This is surprising because such a task is straightforward in the stochastic setting, as all hypotheses are non-discriminatory. We then study whether actively using the group identity can correct the above, similarly to how it enables corrections against discriminatory predictors [20]. The answer is negative even in this scenario (Theorem 2): if the population is sufficiently unbalanced, any no-regret learning algorithm will be unfair in composition with respect to false negative rate, even if it is not group-unaware.

Group-unaware algorithms. We first present the impossibility result for group-unaware algorithms. In our construction, the adversary is oblivious, there is perfect balance across groups (half of the population corresponds to each group), and perfect balance within each group (half of the labels of each group are positive and half negative).

Theorem 1. For all $\alpha < 3/8$, there exists $\epsilon > 0$ such that any group-unaware algorithm that satisfies $\mathbb{E}_\sigma[\mathrm{ApxReg}_{\epsilon,T}(f)] = o(T)$ for all $f \in F$ is $\alpha$-unfair in composition with respect to false negative rate, even for perfectly balanced sequences.

Proof sketch. Consider an instance that consists of two groups $G = \{A, B\}$, two experts $F = \{h_n, h_u\}$, and two phases: Phase I and Phase II. Group A is the group we end up discriminating against, while group B is boosted by the discrimination with respect to false negative rate. At each round t, the groups arrive stochastically with probability 1/2 each, independent of $\sigma^{1:t-1}$.
The experts output a score value in $\hat{Y} = [0, 1]$, where the score $\hat{y}^t_f \in \hat{Y}$ can be interpreted as the probability that expert f assigns to the label being positive in round t, i.e. to $y(t) = +$. The loss function is the expected probability of error, given by $\ell(\hat{y}, y) = \hat{y} \cdot \mathbb{1}\{y = -\} + (1 - \hat{y}) \cdot \mathbb{1}\{y = +\}$. The two experts are very simple: $h_n$ always predicts negative, i.e. $\hat{y}^t_{h_n} = 0$ for all t, and $h_u$ is an unbiased expert who, irrespective of the group or the label, makes an inaccurate prediction with probability $\beta = 1/4 + \sqrt{\epsilon}$, i.e. $\hat{y}^t_{h_u} = \beta \cdot \mathbb{1}\{y(t) = -\} + (1 - \beta) \cdot \mathbb{1}\{y(t) = +\}$ for all t. Both experts are fair in isolation with respect to both false negative and false positive rates: the FNR is 100% for $h_n$ and $\beta$ for $h_u$ regardless of the group, and the FPR is 0% for $h_n$ and $\beta$ for $h_u$, again independent of the group. The instance proceeds in two phases: 1. Phase I lasts for T/2 rounds. The adversary assigns negative labels to examples with group context B and assigns a label uniformly at random to examples from group A. 2. In Phase II, there are two plausible worlds: (a) if the expected probability the algorithm assigned to expert $h_u$ in Phase I is at least $\mathbb{E}_\sigma[\sum_{t=1}^{T/2} p^t_{h_u}] > \sqrt{\epsilon} \cdot T$, then the adversary assigns negative labels to both groups; (b) otherwise, the adversary assigns positive labels to examples with group context B, while examples from group A keep receiving positive and negative labels with probability one half each. We will show that for any algorithm with the vanishing approximate regret property, i.e. with $\mathrm{ApxReg}_{\epsilon,T}(f) = o(T)$, the condition for the first world is never triggered, and hence the above sequence is indeed balanced. We now show why this instance is unfair in composition with respect to false negative rate. The proof involves showing the following two claims, whose proofs we defer to the supplementary material. 1. In Phase I, any $\epsilon$-approximate regret algorithm needs to select the negative expert $h_n$ most of the time to ensure small approximate regret with respect to $h_n$. This means that, in Phase I (where we encounter half of the positive examples from group A and none from group B), the false negative rate of the algorithm is close to 1. 2. In Phase II, any $\epsilon$-approximate regret algorithm should quickly catch up to ensure small approximate regret with respect to $h_u$, and hence the false negative rate of the algorithm is closer to $\beta$. Since the algorithm is group-unaware, this creates a mismatch between the false negative rate on B (whose positive examples all arrive in this phase) and on A (which has also received many false negatives before).
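For concreteness, the loss and the two experts of this construction can be written down directly. This is a minimal sketch under our own naming conventions (eps stands for the ε of the theorem); note that, as in the construction, the unbiased expert's score is defined in terms of the realized label.

```python
import math

def expected_error_loss(y_hat, y):
    # l(y_hat, y) = y_hat * 1{y = '-'} + (1 - y_hat) * 1{y = '+'}
    return y_hat if y == '-' else 1.0 - y_hat

def h_n(y_t):
    # pessimistic expert: always predicts negative (score 0), so FNR = 1, FPR = 0
    return 0.0

def make_h_u(eps):
    # unbiased expert: errs with probability beta = 1/4 + sqrt(eps),
    # irrespective of group or label, so it is fair in isolation
    beta = 0.25 + math.sqrt(eps)
    def h_u(y_t):
        return beta if y_t == '-' else 1.0 - beta
    return h_u
```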
Group-aware algorithms. We now turn our attention to group-aware algorithms, which can use the group context of the example to select the probability of each expert, and provide a similar impossibility result. There are three changes compared to the impossibility result we provided for group-unaware algorithms. First, the adversary is not oblivious but adaptive. Second, we do not have perfect balance across populations; instead the minority population arrives with probability b < 0.49, while the majority population arrives with probability 1 − b. Third, the labels are not equally distributed across positive and negative for each population; instead the positive labels of one group make up at least a c fraction of that group's examples, for a small c > 0. Although the upper bounds on b and c are not optimized, our impossibility result cannot extend to b = c = 1/2. Understanding whether one can achieve fairness in composition for some values of b and c is an interesting open question. Our impossibility guarantee is the following:

Theorem 2. For any group imbalance $b < 0.49$ and $0 < \alpha < \frac{0.49 - 0.99b}{1-b}$, there exists $\epsilon_0 > 0$ such that for all $0 < \epsilon < \epsilon_0$, any algorithm that satisfies $\mathbb{E}_\sigma[\mathrm{ApxReg}_{\epsilon,T}(f)] = o(T)$ for all $f \in F$ is $\alpha$-unfair in composition.

Proof sketch. The instance has two groups: $G = \{A, B\}$. Examples with group context A are discriminated against and arrive randomly with probability b < 1/2, while examples with group context B are boosted by the discrimination and arrive with the remaining probability 1 − b. There are again two experts $F = \{h_n, h_p\}$, which output score values in $\hat{Y} = [0, 1]$, where $\hat{y}^t_f$ can be interpreted as the probability that expert f assigns to the label being + in round t. We use the earlier loss function $\ell(\hat{y}, y) = \hat{y} \cdot \mathbb{1}\{y = -\} + (1 - \hat{y}) \cdot \mathbb{1}\{y = +\}$. The first expert $h_n$ is again pessimistic and always predicts negative, i.e. $\hat{y}^t_{h_n} = 0$, while the other expert $h_p$ is optimistic and always predicts positive, i.e. $\hat{y}^t_{h_p} = 1$. Both satisfy fairness in isolation with respect to equalized odds (false negative rate and false positive rate). Let $c = 1/101^2$ denote the fraction of the input consisting of positive examples from group A, ensuring that $\mathbb{E}_\sigma|S^\sigma_A(\mathrm{FNR})| = \Omega(T)$. The instance proceeds in two phases. 1. Phase I lasts $\Theta \cdot T$ rounds for $\Theta = 101c$. The adversary assigns negative labels to examples with group context B. For examples with group context A, the adversary acts as follows: • if the algorithm assigns probability to the negative expert below $\gamma(\epsilon) = \frac{99 - 2\epsilon}{100}$, i.e. $p^t_{h_n}(\sigma^{1:t-1}) < \gamma(\epsilon)$, then the adversary assigns a negative label; • otherwise, the adversary assigns a positive label. 2. In Phase II, there are two plausible worlds: (a) the adversary assigns negative labels to both groups if the expected number of times the algorithm selected the negative expert with probability at least $\gamma(\epsilon)$ on members of group A is less than $c \cdot b \cdot T$, i.e. $\mathbb{E}_\sigma[|\{t \leq \Theta \cdot T : g(t) = A,\ p^t_{h_n} \geq \gamma(\epsilon)\}|] < c \cdot b \cdot T$; (b) otherwise she assigns positive labels to examples with group context B and negative labels to examples with group context A. Note that, as before, the condition for the first world will never be triggered by any no-regret learning algorithm (we elaborate on this below), which ensures that $\mathbb{E}_\sigma|S^\sigma_A(\mathrm{FNR})| \geq c \cdot b \cdot T$. The proof is based on the following claims, whose proofs are deferred to the supplementary material. 1. In Phase I, any vanishing approximate regret algorithm enters the second world of Phase II. 2. This implies a lower bound on the false negative rate on A, i.e. $\mathrm{FNR}(A) \geq \gamma(\epsilon) = \frac{99-2\epsilon}{100}$. 3. In Phase II, any $\epsilon$-approximate regret algorithm assigns large enough probability to expert $h_p$ for group B, implying an upper bound on the false negative rate on B, i.e. $\mathrm{FNR}(B) \leq \frac{1}{2(1-b)}$. Together these give a gap between the false negative rates of at least $\alpha$.

4 Fairness in composition with respect to an alternative metric

The negative results of the previous section give rise to a natural question: can fairness in composition be achieved for some other fairness metric in an online setting? We answer this question positively by suggesting the equalized error rates metric EER, which captures the average loss over the total number of examples (regardless of whether this loss comes from false negatives or false positives). The relevant subset induced by this metric, $S^\sigma_g(\mathrm{EER})$, is the set of all examples coming from group $g \in G$.
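Since equalized error rates simply compares per-group average losses, checking it on a played-out sequence takes a few lines of code. The sketch below is our own illustration; it returns the per-group averages and the largest pairwise gap, which α-fairness in composition (Definition 2) requires to be at most α in expectation.

```python
from collections import defaultdict

def equalized_error_rate_gap(history):
    """history: (group, expected_loss) pairs, one per round.

    Returns the per-group average losses and the largest pairwise gap;
    alpha-fairness in composition asks for this gap to be at most alpha.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for group, loss in history:
        totals[group] += loss
        counts[group] += 1
    averages = {g: totals[g] / counts[g] for g in totals}
    gap = max(averages.values()) - min(averages.values())
    return averages, gap
```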
We again assume that the experts are fair in isolation with respect to equalized error rate and show that a simple scheme, which runs one separate instance of multiplicative weights for each group, achieves fairness in composition (Theorem 3). The result holds for general loss functions (beyond pure classification) and is robust to the experts only being approximately fair in isolation. A crucial property we use is that multiplicative weights performs not only no worse than the best expert but also no better; in fact, this property holds more generally for online learning algorithms with optimal regret guarantees [14]. Interestingly, not all algorithms can achieve fairness in composition even with respect to this refined notion. We provide two algorithm classes for which this is unachievable. First, we show that any algorithm (subject to a technical condition satisfied by algorithms such as multiplicative weights and follow the perturbed leader) that ignores the group identity can be unboundedly unfair with respect to equalized error rates (Theorem 4). This suggests that the algorithm needs to actively discriminate between the groups to achieve fairness with respect to equalized error rates. Second, we show a similar negative statement for any algorithm that satisfies the more involved guarantee of small shifting regret, and therefore outperforms the best expert (Theorem 5). This suggests that the algorithm used should be good, but not too good. This result is, to the best of our knowledge, a first application where shifting regret may not be desirable, which may be of independent interest.

The positive result. We run separate instances of multiplicative weights with a fixed learning rate $\eta$, one for each group. More formally, for each pair of expert $f \in F$ and group $g \in G$, we initialize weights $w^1_{f,g} = 1$. At each round $t \in \{1, 2, \ldots, T\}$, an example with group context $g(t)$ arrives and the learner selects a probability distribution based on the corresponding weights: $p^t_f = w^t_{f,g(t)} / \sum_{j \in F} w^t_{j,g(t)}$. Then the weights corresponding to group $g(t)$ are updated exponentially: $w^{t+1}_{f,g} = w^t_{f,g} \cdot (1-\eta)^{\ell^t_f \cdot \mathbb{1}\{g(t)=g\}}$.

Theorem 3. For any $\alpha > 0$ and any $\epsilon < \alpha$, running separate instances of multiplicative weights for each group with learning rate $\eta = \min(\epsilon, \alpha/6)$ guarantees $\alpha$-fairness in composition and $\epsilon$-approximate regret of at most $O(|G| \log(d)/\epsilon)$.

Proof sketch. The proof is based on the property that multiplicative weights performs not only no worse than the best expert in hindsight but also no better. Therefore the average performance of multiplicative weights on each group is approximately equal to the average performance of the best expert for that group. Since the experts are fair in isolation, the average performance of the best expert is the same across all groups, which guarantees the equalized error rates desideratum. We make these arguments formal in the supplementary material.

Remark 1. If the instance is instead only approximately fair in isolation with respect to equalized error rates, i.e. each expert's error rates across groups are not exactly equal but within some constant $\kappa$, the same analysis implies $(\alpha+\kappa)$-fairness in composition with respect to equalized error rates.
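As a quick sanity check of the guarantee (and of Remark 1's robustness to approximate fairness in isolation), one can simulate the per-group scheme end to end. The experiment below is entirely our own toy construction: two groups arriving alternately and two experts that err with the same probability on every group, hence fair in isolation with respect to equalized error rates.

```python
import random

def simulate(T=20000, eta=0.05, error_rate=0.3, seed=0):
    """Toy check of Theorem 3: per-group MW with fair-in-isolation experts."""
    rng = random.Random(seed)
    groups = ['A', 'B']
    weights = {g: [1.0, 1.0] for g in groups}  # one MW instance per group
    totals = {g: 0.0 for g in groups}
    counts = {g: 0 for g in groups}
    for t in range(T):
        g = groups[t % 2]  # alternating (adversarial-looking) arrivals
        w = weights[g]
        p = [wf / sum(w) for wf in w]
        # both experts err with the same probability on every group,
        # so each is fair in isolation w.r.t. equalized error rates
        losses = [1.0 if rng.random() < error_rate else 0.0 for _ in w]
        totals[g] += sum(pf * lf for pf, lf in zip(p, losses))
        counts[g] += 1
        weights[g] = [wf * (1.0 - eta) ** lf for wf, lf in zip(w, losses)]
    return {g: totals[g] / counts[g] for g in groups}

print(simulate())  # the two per-group average losses should nearly coincide
```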
Impossibility results for group-unaware algorithms. In the previous algorithm, it was crucial that the examples of one group do not interfere with the decisions of the algorithm on the other group. We show that, had we run a single multiplicative weights algorithm in a group-unaware way, we would not have accomplished fairness in composition. In fact, this impossibility result holds for any algorithm with vanishing $\epsilon$-approximate regret whose learning dynamic (the distribution $p^t$ at each round $t$) is a deterministic function of the difference between the cumulative losses of the experts (without taking the experts' identity into consideration). This is satisfied, for instance, by multiplicative weights and by follow the perturbed leader with a constant learning rate. Unlike the previous section, the impossibility results for equalized error rates require the groups to arrive adversarially (a regime which the positive result above also covers). The proof of the following theorem is provided in the supplementary material.

Theorem 4. For any $\alpha > 0$ and any $\epsilon > 0$, running a single algorithm from the above class in a group-unaware way is $\alpha$-unfair in composition with respect to equalized error rate.

Impossibility results for shifting algorithms. The reader may also wonder whether it suffices to simply run separate learning algorithms for the two groups, or whether multiplicative weights has a special property. In the following theorem, we show that the latter is the case. In particular, multiplicative weights has the property of not doing better than the best expert in hindsight. The main representatives of algorithms that lack this property are those achieving low approximate regret compared to a shifting benchmark (tracking the best expert). More formally, approximate regret against a shifting comparator $f^\star = (f^\star(1), \ldots, f^\star(T))$ is defined as $\mathrm{ApxReg}_{\epsilon,T}(f^\star) = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - (1+\epsilon) \sum_{t=1}^{T} \ell^t_{f^\star(t)}$, and typical guarantees are $\mathbb{E}[\mathrm{ApxReg}_{\epsilon,T}(f^\star)] = O(K(f^\star) \cdot \ln(dT)/\epsilon)$, where $K(f^\star) = \sum_{t=2}^{T} \mathbb{1}\{f^\star(t) \neq f^\star(t-1)\}$ is the number of switches in the comparator. We show that any algorithm achieving such a guarantee, even only for $K(f^\star) = 2$, does not satisfy fairness in composition with respect to equalized error rate. This indicates that, for the purpose of fairness with respect to equalized error rates, the algorithm not being too good is essential. This is established in the following theorem, whose proof is deferred to the supplementary material.

Theorem 5. For any $\alpha < 1/2$ and $\epsilon > 0$, and for any algorithm that achieves the vanishing approximate regret property against shifting comparators $f$ with $K(f) = 2$ switches, running separate instances of that algorithm for each group is $\alpha$-unfair in composition with respect to equalized error rate.
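For readers unfamiliar with the shifting-regret algorithms that Theorem 5 rules out, a representative is the fixed-share family (tracking the best expert). The step below is a schematic sketch from memory, not the paper's construction: a multiplicative weights update followed by mixing a small amount of uniform mass back in, which is the mechanism behind shifting-regret bounds.

```python
def fixed_share_step(w, losses, eta, alpha):
    """One step of a fixed-share-style update (schematic sketch).

    An MW step followed by uniform mixing; the mixing lets the algorithm
    track a shifting comparator, i.e. do better than any single expert.
    """
    d = len(w)
    v = [wf * (1.0 - eta) ** lf for wf, lf in zip(w, losses)]  # MW step
    total = sum(v)
    return [(1.0 - alpha) * vf + alpha * total / d for vf in v]
```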
5 Discussion

In this paper, we introduce the study of avoiding discrimination against protected groups in online settings with non-i.i.d. examples. Our impossibility results for equalized odds consist of only two phases, which highlights the challenge in correcting for historical biases in online decision making. Our work also opens up a quest for definitions that are relevant and tractable in non-i.i.d. online settings for specific tasks. We introduce the notion of equalized error rates, which can be a useful metric for non-discrimination in settings where all examples that contribute towards performance also contribute towards fairness. This is the case in settings where all mistakes are similarly costly, as in speech recognition, recommender systems, or online advertising. However, we do not claim that its applicability is universal. For instance, consider college admission with two perfectly balanced groups that correspond to ethnicity (equal size of the two groups and an equal number of positives and negatives within each group). A racist program organizer can choose to admit all students of one group and reject all students of the other while satisfying equalized error rates – this does not satisfy equalized odds. Given the impossibility result we established for equalized odds, it is interesting to identify definitions that work well for the different tasks one encounters in online non-i.i.d. settings. Moreover, although our positive results extend to the case where the predictors are vetted to be approximately non-discriminatory, they say nothing about the case where the predictors do not satisfy this property. We therefore view our work only as a first step towards understanding non-discrimination in non-i.i.d. online settings.

Acknowledgements

The authors would like to thank Manish Raghavan for useful discussions that improved the presentation of the paper. This work was supported by NSF grants CCF-1800317 and CCF-1563714, as well as a Google Ph.D. Fellowship.
1. What is the main contribution of the paper regarding fairness in online learning? 2. What are the strengths and weaknesses of the paper's theoretical analysis? 3. Do you have any questions or concerns about the paper's definition of online learning and its relation to machine learning? 4. How does the reviewer assess the relevance and impact of the paper's findings in real-world scenarios? 5. Are there any suggestions for improving the clarity and context of the paper's presentation?
Review
Review The paper theoretically studies the suitability of achieving a particular definition of fairness, equalized odds (which constrains the false positive and false negative rates), in the context of online learning with expert advice (Cesa-Bianchi and Lugosi, 2006). In particular, the authors show that achieving an online algorithm that jointly satisfies zero regret and equalized odds is not possible. Afterward, they show that this is not the case when considering fairness in terms of the total number of errors per group. They also discuss that, unfortunately, this definition of fairness (also previously discussed in Zafar et al., 2017) is not realistic (or even fair) in many real-world scenarios. On the positive side, I believe that (im)possibility theoretical studies on when a fairness definition can be accomplished are definitely a major contribution to the field. However, I also believe that the paper has important gaps to be filled: 1) Their definition of online learning comes from the game theory literature and does not correspond to the standard ML view on online learning. However, the authors do not clarify this particular setting in the abstract (nor in the title of the paper) and do not provide any reference for the considered "benign setting" -- where at time t there is a set of experts providing their advice about the decision to be made, and the learner selects one expert's advice (decision) by drawing i ~ p^t, getting a loss l(t, i) that depends on the selected expert. Is this the actual setting? Please clarify this point in the paper, and add the necessary references. 2) Although the paper is presented in the context of fairness, which is definitely a real problem, the authors do not provide a single real-world example where their setting (based on the game-theoretic "benign" setting above) would fit. As a consequence, it is hard to evaluate the potential impact of the presented theoretical results in the field. In summary, I believe that although the paper presents good ideas and results, it does not provide the necessary context and details to judge its contribution. Cesa-Bianchi, Nicolò, and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. B. Zafar, I. Valera, M. Gomez-Rodriguez and K. Gummadi, "Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment" (full paper), 26th International World Wide Web Conference (WWW), Perth (Australia), April 2017.
NIPS
Title
On preserving non-discrimination when combining expert advice

Abstract
We study the interplay between sequential decision making and avoiding discrimination against protected groups, when examples arrive online and do not follow distributional assumptions. We consider the most basic extension of classical online learning: Given a class of predictors that are individually non-discriminatory with respect to a particular metric, how can we combine them to perform as well as the best predictor, while preserving non-discrimination? Surprisingly, we show that this task is unachievable for the prevalent notion of equalized odds, which requires equal false negative rates and equal false positive rates across groups. On the positive side, for another notion of non-discrimination, equalized error rates, we show that running separate instances of the classical multiplicative weights algorithm for each group achieves this guarantee. Interestingly, even for this notion, we show that algorithms with stronger performance guarantees than multiplicative weights cannot preserve non-discrimination.

1 Introduction
The emergence of machine learning in the last decade has given rise to an important debate regarding the ethical and societal responsibility of its offspring. Machine learning has provided a universal toolbox enhancing decision making in many disciplines, from advertising and recommender systems to education and criminal justice. Unfortunately, both the data and their processing can be biased against specific population groups (even inadvertently) in every single step of the process [4]. This has generated societal and policy interest in understanding the sources of this discrimination, and interdisciplinary research has attempted to mitigate its shortcomings. Discrimination is commonly an issue in applications where decisions need to be made sequentially. The most prominent such application is online advertising, where platforms need to sequentially select which ad to display in response to particular query searches. This process can introduce discrimination against protected groups in many ways, such as filtering particular alternatives [12, 2] and reinforcing existing stereotypes through search results [38, 25]. Another canonical example of sequential decision making is medical trials, where underexploration in female groups often leads to significantly worse treatments for them [31]. Similar issues occur in image classification, as stressed by “gender shades” [7]. The reverse (overexploration in minority populations) can also cause concerns, especially if conducted in a non-transparent fashion [5]. In these sequential settings, the assumption that data are i.i.d. is often violated. Online advertising, recommender systems, medical trials, image classification, loan decisions, and criminal recidivism all require decisions to be made sequentially. The corresponding labels are not identically distributed across time and can be affected by the economy, recent events, etc. Labels are also not independent across rounds – if a bank offers a loan then this decision can affect whether the loanee or their environment will be able to repay future loans, thereby affecting future labels, as discussed by Liu et al. [32]. As a result, it is important to understand the effect of this adaptivity on non-discrimination. The classical way to model settings that are not i.i.d.
is via adversarial online learning [30, 17], which poses the question: Given a class F of predictors, how can we make online predictions that perform as well as the best predictor from F in hindsight? The most basic online learning question (answered via the celebrated “multiplicative weights” algorithm) concerns competing with a finite set of predictors. The class F is typically referred to as “experts” and can be thought of as “features” of the example, where we want to make online predictions that compete with the best 1-sparse predictor. In this work, we wish to understand the interplay between adaptivity and non-discrimination and therefore consider the most basic extension of the classical online learning question: Given a class of individually non-discriminatory predictors, how can we combine them to perform as well as the best predictor, while preserving non-discrimination? The assumption that predictors are individually non-discriminatory is a strong assumption on the predictors and makes the task trivial in the batch setting, where the algorithm is given labeled examples and wishes to perform well on unseen examples drawn from the same distribution. This happens because the algorithm can learn the best predictor from the labeled examples and then follow it (since this predictor is individually non-discriminatory, the algorithm does not exhibit discrimination). This enables us to understand the potential overhead that adaptivity is causing and significantly strengthens any impossibility result. Moreover, we can assume that the predictors have been individually vetted to satisfy the non-discrimination desiderata – we therefore wish to understand how to efficiently compose these non-discriminatory predictors while preserving non-discrimination.

1.1 Our contribution

Our impossibility results for equalized odds. Surprisingly, we show that for a prevalent notion of non-discrimination, equalized odds, it is impossible to preserve non-discrimination while also competing comparably with the best predictor in hindsight (the no-regret property). Equalized odds, suggested by Hardt et al. [20] in the batch setting, restricts the set of allowed predictors by requiring that, when examples come from different groups, the prediction is independent of the group conditioned on the label. In binary classification, this means that the false negative rate (the fraction of positive examples predicted negative) is equal across groups, and the same holds for the false positive rate (defined analogously). This notion was popularized by a recent debate on the potential bias of machine learning risk tools for criminal recidivism [1, 10, 28, 16]. Our impossibility results demonstrate that the order in which examples arrive significantly complicates the task of achieving the desired efficiency while preserving non-discrimination with respect to equalized odds. In particular, we show that any algorithm agnostic to the group identity either cannot achieve performance comparable to the best predictor or exhibits discrimination in some instances (Theorem 1). This occurs in remarkably simple settings with only two individually non-discriminatory predictors, two groups, and perfectly balanced instances: the groups are of equal size and each receives an equal number of positive and negative labels. The only imbalance lies in the order in which these labels arrive, which is also relatively well behaved – labels are generated from two i.i.d. distributions, one in the first half of the instance and one in the second half.
Although in many settings we cannot actively use the group identity of the examples due to legal reasons (e.g., in hiring), one may wonder whether these impossibility results disappear if we can actively use the group information to compensate for past mistakes. We show that this is also not the case (Theorem 2). Although here our groups are not perfectly balanced, the construction is again very simple and consists of only two groups and two predictors: one always predicting positive and one always predicting negative. The simplicity of the settings, combined with the very strong assumption that the predictors are individually non-discriminatory, speaks to the trade-off between adaptivity and non-discrimination with respect to equalized odds.

Our results for equalized error rates. The strong impossibility results with respect to equalized odds invite the natural question of whether there exists some alternative fairness notion that, given access to non-discriminatory predictors, achieves efficiency while preserving non-discrimination. We answer this positively by suggesting the notion of equalized error rates, which requires that the average expected loss (regardless of whether it stems from false positives or false negatives) encountered by each group should be the same. This notion makes sense in settings where performance and fairness are measured with respect to the same objective. Consider a medical application where people from different subpopulations wish to receive appropriate treatment and any error in treatment costs equally both towards performance and towards fairness.1 It is morally objectionable to discriminate against one group, e.g. based on race, using it as experimentation to enhance the quality of service of the other, and it is reasonable to require that all subpopulations receive the same quality of service. For this notion, we show that, if all predictors are individually non-discriminatory with respect to equalized error rates, running separate multiplicative weights algorithms, one for each subpopulation, preserves this non-discrimination without any decay in efficiency (Theorem 3). The key property we use is that the multiplicative weights algorithm is guaranteed to perform not only no worse than the best predictor in hindsight but also no better; this property holds for a broader class of algorithms [14]. Our result applies to general loss functions beyond binary predictions and only requires the predictors to satisfy the weakened assumption of being approximately non-discriminatory. Finally, we examine whether the decisions to run separate algorithms and to run this particular, deliberately not-too-strong algorithm were important for the result. We first give evidence that running separate algorithms is essential: if we run a single instance of “multiplicative weights” or “follow the perturbed leader”, we cannot guarantee non-discrimination with respect to equalized error rates (Theorem 4). We then show that the property of not performing better than the best predictor is also crucial; in particular, stronger algorithms that satisfy the guarantee of low shifting regret [21, 6, 34] are also unable to guarantee this non-discrimination (Theorem 5). These algorithms are considered superior to classical no-regret algorithms as they can better adapt to changes in the environment, which has nice implications in game-theoretic settings [35].
Our latter impossibility result is a first application where having these strong guarantees against changing benchmarks is not necessarily desired, and is therefore of independent learning-theoretic interest.

1.2 Related work

There is a large line of work on fairness and non-discrimination in machine learning (see [36, 8, 13, 41, 22, 20, 10, 28, 26] for a non-exhaustive list). We elaborate on works that either study group notions of fairness or fairness in online learning. The last decade has seen a lot of work on group notions of fairness, mostly in the classification setting. Examples include notions that compare the percentage of members predicted positive, such as demographic parity [8] and disparate impact [15], as well as equalized odds [20] and calibration across groups [10, 28]. There is no consensus on a universal fairness notion; rather, the specific notion considered is largely task-specific. In fact, previous works identified that these notions are often not compatible with each other [10, 28], raised concerns that they may introduce unintentional discrimination [11], and suggested the need to go beyond such observational criteria via causal reasoning [27, 29]. Prior to our work, group fairness notions have been studied primarily in the batch learning setting with the goal of optimizing a loss function subject to a fairness constraint, either in a post-hoc correction framework as proposed by Hardt et al. [20] or more directly during training from batch data [41, 19, 39, 40, 3], which requires care since the predictors are discriminatory with respect to the particular metric of interest. The setting we focus on in this paper does not have the above challenges, since all predictors are non-discriminatory; nevertheless, we obtain surprising impossibility results due to the ordering in which labels arrive. Recently, fairness in online learning has also started receiving attention. One line of work focuses on imposing a particular fairness guarantee at all times for bandits and contextual bandits, either for individual fairness [22, 23] or for group fairness [9]. Another line of work points to counterintuitive externalities of using contextual bandit algorithms agnostic to the group identity and suggests that heterogeneity in data can replace the need for exploration [37, 24]. Moreover, following a seminal paper by Dwork et al. [13], a line of work aims to treat similar people similarly in online settings [33, 18]. Our work distinguishes itself from these directions mainly in the objective, since we require the non-discrimination to hold in the long term instead of at every round; this extends the classical batch definitions of non-discrimination to the online setting. In particular, we focus only on situations where there are enough samples from each population of interest, and we do not penalize the algorithm for a few wrong decisions, which would lead it to be overly pessimistic. Another difference is that previous work focuses either on individual notions of fairness or on i.i.d. inputs, while our work is about non-i.i.d. inputs and group notions of fairness.

1 In contrast, in equalized odds, a misprediction only counts towards the false negative metric if the label is positive.

2 Model

Online learning protocol with group context. We consider the classical online learning setting of prediction with expert advice, where a learner needs to make sequential decisions for T rounds by combining the predictions of a finite set F of d hypotheses (also referred to as experts).
We denote the outcome space by $\mathcal{Y}$; in binary classification, this corresponds to $\mathcal{Y} = \{+,-\}$. Additionally, we introduce a set of disjoint groups $\mathcal{G}$ which identifies subsets of the population based on a protected attribute (such as gender, ethnicity, or income). The online learning protocol with group context proceeds in T rounds. Each round t is associated with a group context $g(t) \in \mathcal{G}$ and an outcome $y(t) \in \mathcal{Y}$. We denote the resulting T-length time-group-outcome sequence tuple by $\sigma = \{(t, g(t), y(t)) \in \mathbb{N} \times \mathcal{G} \times \mathcal{Y}\}_{t=1}^{T}$. This is a random variable that can depend on the randomness in the generation of the groups and the outcomes. We use the shorthand $\sigma^{1:\tau} = \{(t, g(t), y(t))\}_{t=1}^{\tau}$ to denote the subsequence until round $\tau$. The exact protocol for generating these sequences is described below. At round t = 1, 2, . . . , T:

1. An example with group context $g(t) \in \mathcal{G}$ arrives stochastically or is adversarially selected.

2. The learning algorithm or learner L commits to a probability distribution $p^t \in \Delta(d)$ across experts, where $p^t_f$ denotes the probability that she follows the advice of expert $f \in F$ at round t. This distribution $p^t$ can be a function of the sequence $\sigma^{1:t-1}$. We call the learner group-unaware if she ignores the group context $g(\tau)$ for all $\tau \le t$ when selecting $p^t$.

3. An adversary A then selects an outcome $y(t) \in \mathcal{Y}$. The adversary is called adaptive if the groups/outcomes at round $t = \tau + 1$ are a function of the realization of $\sigma^{1:\tau}$; otherwise she is called oblivious. The adversary always has access to the learning algorithm, but an adaptive adversary additionally has access to the realized $\sigma^{1:t-1}$ and hence also knows $p^t$. Simultaneously, each expert $f \in F$ makes a prediction $\hat{y}^t_f \in \hat{\mathcal{Y}}$, where $\hat{\mathcal{Y}}$ is a generic prediction space; for example, in binary classification, the prediction space could simply be the positive or negative labels, $\hat{\mathcal{Y}} = \{+,-\}$; a probabilistic score, $\hat{\mathcal{Y}} = [0, 1]$, with $\hat{y}^t_f$ interpreted as the probability the expert $f \in F$ assigns to the positive label in round t; or even an uncalibrated score like the output of a support vector machine, $\hat{\mathcal{Y}} = \mathbb{R}$. Let $\ell : \hat{\mathcal{Y}} \times \mathcal{Y} \to [0, 1]$ be the loss function between predictions and outcomes. This leads to a corresponding loss vector $\ell^t \in [0, 1]^d$, where $\ell^t_f = \ell(\hat{y}^t_f, y(t))$ denotes the loss the learner incurs if she follows expert $f \in F$.

4. The learner then observes the entire loss vector $\ell^t$ (full-information feedback) and incurs expected loss $\sum_{f \in F} p^t_f \ell^t_f$. For classification, this feedback is obtained by observing $y(t)$.

In this paper, we consider a setting where all the experts $f \in F$ are fair in isolation (formalized below). Regarding the group contexts, our main impossibility results (Theorems 1 and 2) assume that the group contexts $g(t)$ arrive stochastically from a fixed distribution, while our positive result (Theorem 3) holds even when they are adversarially selected. For simplicity of notation, we assume throughout the presentation that the learner's algorithm produces the distribution $p^t$ of round $t = \tau + 1$ deterministically based on $\sigma^{1:\tau}$, and therefore all our expectations are taken only over $\sigma$, which is the case in most algorithms. Our results extend when the algorithm uses extra randomness to select the distribution.
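For concreteness, the following is a minimal sketch of one run of this protocol. The two constant experts, the particular loss, the oblivious adversary, and the multiplicative-weights learner are illustrative assumptions, not part of the protocol itself:

```python
import numpy as np

rng = np.random.default_rng(0)
T, eta = 1000, 0.1
experts = [lambda: 0.0, lambda: 1.0]   # scores: always-negative, always-positive
weights = np.ones(len(experts))        # a multiplicative-weights learner

def loss(score, y):
    # Expected probability of error: score * 1{y=-} + (1 - score) * 1{y=+}.
    return score if y == -1 else 1.0 - score

total = 0.0
for t in range(T):
    g = "A" if rng.random() < 0.5 else "B"        # 1. group context arrives
    p = weights / weights.sum()                    # 2. learner commits to p^t
    y = -1 if g == "B" else rng.choice([-1, 1])    # 3. adversary picks the outcome
    l = np.array([loss(h(), y) for h in experts])  #    experts predict; losses form l^t
    total += p @ l                                 # 4. full information, expected loss
    weights *= (1.0 - eta) ** l                    #    exponential weight update
```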
Group fairness in online learning. We now define non-discrimination (or fairness) with respect to a particular evaluation metric M; e.g., in classification, the false negative rate metric (FNR) is the fraction of examples with positive outcome that are incorrectly predicted negative. For any realization of the time-group-outcome sequence $\sigma$ and any group $g \in \mathcal{G}$, metric M induces a subset of the population $S^{\sigma}_g(M)$ that is relevant to it. For example, in classification, $S^{\sigma}_g(\mathrm{FNR}) = \{t : g(t) = g, y(t) = +\}$ is the set of positive examples of group g. The performance of expert $f \in F$ on the subpopulation $S^{\sigma}_g(M)$ is denoted by $M^{\sigma}_f(g) = \frac{1}{|S^{\sigma}_g(M)|} \sum_{t \in S^{\sigma}_g(M)} \ell^t_f$.

Definition 1. An expert $f \in F$ is called fair in isolation with respect to metric M if, for every sequence $\sigma$, her performance with respect to M is the same across groups, i.e. $M^{\sigma}_f(g) = M^{\sigma}_f(g')$ for all $g, g' \in \mathcal{G}$.

The learner's performance on this subpopulation is $M^{\sigma}_L(g) = \frac{1}{|S^{\sigma}_g(M)|} \sum_{t \in S^{\sigma}_g(M)} \sum_{f \in F} p^t_f \ell^t_f$. To formalize our non-discrimination desiderata, we require the algorithm to have similar expected performance across groups when given access to fair-in-isolation predictors. We make the following assumptions to avoid trivial impossibility results due to low-probability events or underrepresented populations. First, we take the expectation over sequences generated by the adversary A (which has access to the learning algorithm L). Second, we require the relevant subpopulations to be, in expectation, large enough. Our positive results do not depend on either of these assumptions. More formally:

Definition 2. Consider a set of experts F such that each expert is fair in isolation with respect to metric M. Learner L is called α-fair in composition with respect to metric M if, for all adversaries that produce $\mathbb{E}_{\sigma}[\min(|S^{\sigma}_g(M)|, |S^{\sigma}_{g'}(M)|)] = \Omega(T)$ for all $g, g'$, it holds that: $|\mathbb{E}_{\sigma}[M^{\sigma}_L(g)] - \mathbb{E}_{\sigma}[M^{\sigma}_L(g')]| \le \alpha$.

We note that, in many settings, we wish to have non-discrimination with respect to multiple metrics simultaneously. For instance, equalized odds requires fairness in composition both with respect to the false negative rate and with respect to the false positive rate (defined analogously). Since we provide an impossibility result for equalized odds, focusing on only one metric makes the result even stronger.

Regret notions. The typical way to evaluate the performance of an algorithm in online learning is via the notion of regret, which compares the performance of the algorithm to the performance of the best expert in hindsight on the realized sequence $\sigma$:

$$\mathrm{Reg}_T = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - \min_{f^\star \in F} \sum_{t=1}^{T} \ell^t_{f^\star}.$$

In the above definition, regret is a random variable depending on the sequence $\sigma$, and therefore on the randomness in groups/outcomes. An algorithm satisfies the no-regret property (or Hannan consistency) in our setting if, for any losses realizable by the above protocol, the regret is sublinear in the time horizon T, i.e. $\mathrm{Reg}_T = o(T)$. This property ensures that, as time goes by, the average regret vanishes. Many online learning algorithms, such as multiplicative weights updates, satisfy this property with $\mathrm{Reg}_T = O(\sqrt{T \log(d)})$. We focus on the notion of approximate regret, a relaxation of regret that gives a small multiplicative slack to the algorithm. More formally, ε-approximate regret with respect to expert $f^\star \in F$ is defined as:

$$\mathrm{ApxReg}_{\varepsilon,T}(f^\star) = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - (1 + \varepsilon) \sum_{t=1}^{T} \ell^t_{f^\star}.$$

We note that typical algorithms guarantee $\mathrm{ApxReg}_{\varepsilon,T}(f^\star) = O(\ln(d)/\varepsilon)$ simultaneously for all experts $f^\star \in F$. When the time horizon is known in advance, setting $\varepsilon = \sqrt{\ln(d)/T}$ recovers the aforementioned regret guarantee. When the time horizon is not known, one can obtain a similar guarantee by adjusting the learning rate of the algorithm appropriately.
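Both the per-group learner performance $M^{\sigma}_L(g)$ and the approximate regret can be computed directly from a realized run. A sketch, assuming the sequence is stored as (group, outcome) pairs and the learner's distributions and loss vectors are logged per round:

```python
import numpy as np

def group_fnr(sigma, learner_probs, expert_losses, group):
    """M^sigma_L(g) for M = FNR: average learner loss on rounds
    where the group context is `group` and the outcome is positive."""
    S = [t for t, (g, y) in enumerate(sigma) if g == group and y == +1]
    if not S:
        return float("nan")
    return float(np.mean([learner_probs[t] @ expert_losses[t] for t in S]))

def approx_regret(learner_probs, expert_losses, f_star, eps):
    """eps-approximate regret against a fixed expert f_star."""
    alg = sum(p @ l for p, l in zip(learner_probs, expert_losses))
    bench = sum(l[f_star] for l in expert_losses)
    return alg - (1.0 + eps) * bench
```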
Our goal is to develop online learning algorithms that combine fair-in-isolation experts in order to achieve both vanishing average expected ε-approximate regret, i.e. for any fixed $\varepsilon > 0$ and $f^\star \in F$, $\mathbb{E}_{\sigma}[\mathrm{ApxReg}_{\varepsilon,T}(f^\star)] = o(T)$, and also non-discrimination with respect to fairness metrics of interest.

3 Impossibility results for equalized odds

In this section, we study a popular group fairness notion, equalized odds, in the context of online learning. A natural extension of equalized odds to online settings would require that the false negative rate, i.e. the percentage of positive examples predicted incorrectly, is the same across all groups, and that the same also holds for the false positive rate. We assume that our experts are fair in isolation with respect to both the false negative and the false positive rate. A weaker notion than equalized odds is equality of opportunity, where the non-discrimination condition is required to be satisfied only for the false negative rate. We first study whether it is possible to achieve the vanishing regret property while guaranteeing α-fairness in composition with respect to false negative rate for arbitrarily small α. When the input is i.i.d., this is trivial as we can learn the best expert in O(log d) rounds and then follow its advice; since the expert is fair in isolation, this will guarantee vanishing non-discrimination. In contrast, we show that, in a non-i.i.d. online setting, this goal is unachievable. We demonstrate this in phenomenally benign settings where there are just two groups G = {A, B} that come from a fixed distribution and just two experts that are fair in isolation (with respect to false negative rate) even per round — not only ex post. Our first construction (Theorem 1) shows that any no-regret learning algorithm that is group-unaware cannot guarantee fairness in composition, even in instances that are perfectly balanced (each pair of label and group gets 1/4 of the examples) — the only adversarial component is the order in which these examples arrive. This is surprising because such a task is straightforward in the stochastic setting, as all hypotheses are non-discriminatory. We then study whether actively using the group identity can correct the aforementioned issue, similarly to how it enables correction against discriminatory predictors [20]. The answer is negative even in this scenario (Theorem 2): if the population is sufficiently unbalanced, any no-regret learning algorithm will be unfair in composition with respect to false negative rate, even if it is not group-unaware.

Group-unaware algorithms. We first present the impossibility result for group-unaware algorithms. In our construction, the adversary is oblivious, there is perfect balance across groups (half of the population corresponds to each group), and perfect balance within each group (half of the labels of each group are positive and half negative).

Theorem 1. For all α < 3/8, there exists $\varepsilon > 0$ such that any group-unaware algorithm that satisfies $\mathbb{E}_{\sigma}[\mathrm{ApxReg}_{\varepsilon,T}(f)] = o(T)$ for all $f \in F$ is α-unfair in composition with respect to false negative rate, even for perfectly balanced sequences.

Proof sketch. Consider an instance that consists of two groups G = {A, B}, two experts $F = \{h_n, h_u\}$, and two phases: Phase I and Phase II. Group A is the group we end up discriminating against, while group B is boosted by the discrimination with respect to false negative rate. At each round t, the groups arrive stochastically with probability 1/2 each, independent of $\sigma^{1:t-1}$.
The experts output a score value in $\hat{\mathcal{Y}} = [0, 1]$, where the score $\hat{y}^t_f \in \hat{\mathcal{Y}}$ can be interpreted as the probability that expert f assigns to the label being positive in round t, i.e. y(t) = +. The loss function is the expected probability of error, given by $\ell(\hat{y}, y) = \hat{y} \cdot \mathbb{1}\{y = -\} + (1 - \hat{y}) \cdot \mathbb{1}\{y = +\}$. The two experts are very simple: $h_n$ always predicts negative, i.e. $\hat{y}^t_{h_n} = 0$ for all t, and $h_u$ is an unbiased expert who, irrespective of the group or the label, makes an inaccurate prediction with probability $\beta = 1/4 + \sqrt{\varepsilon}$, i.e. $\hat{y}^t_{h_u} = \beta \cdot \mathbb{1}\{y(t) = -\} + (1 - \beta) \cdot \mathbb{1}\{y(t) = +\}$ for all t. Both experts are fair in isolation with respect to both false negative and false positive rates: the FNR is 100% for $h_n$ and β for $h_u$ regardless of the group, and the FPR is 0% for $h_n$ and β for $h_u$, again independent of the group. The instance proceeds in two phases:

1. Phase I lasts for T/2 rounds. The adversary assigns negative labels to examples with group context B and assigns a label uniformly at random to examples from group A.

2. In Phase II, there are two plausible worlds: (a) if the expected probability mass the algorithm assigns to expert $h_u$ in Phase I exceeds $\mathbb{E}_{\sigma}\left[\sum_{t=1}^{T/2} p^t_{h_u}\right] > \sqrt{\varepsilon} \cdot T$, then the adversary assigns negative labels to both groups; (b) else, the adversary assigns positive labels to examples with group context B, while examples from group A keep receiving positive and negative labels with probability one half.

We will show that for any algorithm with the vanishing approximate regret property, i.e. with $\mathrm{ApxReg}_{\varepsilon,T}(f) = o(T)$, the condition for the first world is never triggered, and hence the above sequence is indeed balanced. We now show why this instance is unfair in composition with respect to false negative rate. The proof involves showing the following two claims, whose proofs we defer to the supplementary material.

1. In Phase I, any ε-approximate regret algorithm needs to select the negative expert $h_n$ most of the time to ensure small approximate regret with respect to $h_n$. This means that, in Phase I (where we encounter half of the positive examples from group A and none from group B), the false negative rate of the algorithm is close to 1.

2. In Phase II, any ε-approximate regret algorithm should quickly catch up to ensure small approximate regret with respect to $h_u$, and hence the false negative rate of the algorithm is closer to β. Since the algorithm is group-unaware, this creates a mismatch between the false negative rate of B (which only receives false negatives in this phase) and A (which has also received many false negatives before).
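The following is a minimal simulation sketch of this two-phase construction against a group-unaware multiplicative-weights learner. It is not the paper's code: the phase lengths, the learning rate, and the unconditional choice of the adversary's second world are simplifying assumptions. Under these assumptions, the measured false negative rate of group A should end up well above that of group B:

```python
import numpy as np

rng = np.random.default_rng(1)
T, eps = 20000, 0.01
beta = 0.25 + np.sqrt(eps)  # error probability of the unbiased expert h_u

w, eta = np.ones(2), 0.05   # group-unaware multiplicative weights over [h_n, h_u]
fn = {"A": [], "B": []}     # learner's expected loss on positive examples, per group
for t in range(T):
    g = "A" if rng.random() < 0.5 else "B"
    if t < T // 2:          # Phase I: B only negative, A balanced
        y = -1 if g == "B" else (1 if rng.random() < 0.5 else -1)
    else:                   # Phase II (second world): B only positive, A balanced
        y = 1 if g == "B" else (1 if rng.random() < 0.5 else -1)
    l = np.array([1.0 if y == 1 else 0.0,  # h_n errs exactly on positives
                  beta])                   # h_u errs with probability beta
    p = w / w.sum()
    if y == 1:
        fn[g].append(p @ l)                # expected false-negative mass this round
    w *= (1.0 - eta) ** l

print({g: round(float(np.mean(v)), 3) for g, v in fn.items()})  # FNR(A) >> FNR(B)
```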
Group-aware algorithms. We now turn our attention to group-aware algorithms, which can use the group context of the example to select the probability of each expert, and provide a similar impossibility result. There are three changes compared to the impossibility result we provided for group-unaware algorithms. First, the adversary is not oblivious but instead adaptive. Second, we do not have perfect balance across populations; instead, we require that the minority population arrives with probability b < 0.49, while the majority population arrives with probability 1 − b. Third, the labels are not equally distributed across positive and negative for each population; instead, positive labels for one group constitute at least a c fraction of the total examples of the group, for a small c > 0. Although the upper bounds on b and c are not optimized, our impossibility result cannot extend to b = c = 1/2. Understanding whether one can achieve fairness in composition for some values of b and c is an interesting open question. Our impossibility guarantee is the following:

Theorem 2. For any group imbalance b < 0.49 and $0 < \alpha < \frac{0.49 - 0.99b}{1-b}$, there exists $\varepsilon_0 > 0$ such that, for all $0 < \varepsilon < \varepsilon_0$, any algorithm that satisfies $\mathbb{E}_{\sigma}[\mathrm{ApxReg}_{\varepsilon,T}(f)] = o(T)$ for all $f \in F$ is α-unfair in composition.

Proof sketch. The instance has two groups: G = {A, B}. Examples with group context A are discriminated against and arrive randomly with probability b < 1/2, while examples with group context B are boosted by the discrimination and arrive with the remaining probability 1 − b. There are again two experts $F = \{h_n, h_p\}$, which output score values in $\hat{\mathcal{Y}} = [0, 1]$, where $\hat{y}^t_f$ can be interpreted as the probability that expert f assigns to the label being + in round t. We use the earlier loss function $\ell(\hat{y}, y) = \hat{y} \cdot \mathbb{1}\{y = -\} + (1 - \hat{y}) \cdot \mathbb{1}\{y = +\}$. The first expert $h_n$ is again pessimistic and always predicts negative, i.e. $\hat{y}^t_{h_n} = 0$, while the other expert $h_p$ is optimistic and always predicts positive, i.e. $\hat{y}^t_{h_p} = 1$. These satisfy fairness in isolation with respect to equalized odds (false negative rate and false positive rate). Let c = 1/1012 denote the fraction of the input consisting of positive examples for A, ensuring that $|S^{\sigma}_g(\mathrm{FNR})| = \Omega(T)$. The instance proceeds in two phases.

1. Phase I lasts $\Theta \cdot T$ rounds for $\Theta = 101c$. The adversary assigns negative labels to examples with group context B. For examples with group context A, the adversary acts as follows:
• if the algorithm assigns probability to the negative expert below $\gamma(\varepsilon) = \frac{99 - 2\varepsilon}{100}$, i.e. $p^t_{h_n}(\sigma^{1:t-1}) < \gamma(\varepsilon)$, then the adversary assigns a negative label;
• otherwise, the adversary assigns a positive label.

2. In Phase II, there are two plausible worlds: (a) the adversary assigns negative labels to both groups if the expected number of times that the algorithm selected the negative expert with probability higher than $\gamma(\varepsilon)$ on members of group A is less than $c \cdot b \cdot T$, i.e. $\mathbb{E}_{\sigma}\left[\left|\left\{t \le \Theta T : g(t) = A,\ p^t_{h_n} \ge \gamma(\varepsilon)\right\}\right|\right] < c \cdot b \cdot T$; (b) otherwise, she assigns positive labels to examples with group context B and negative labels to examples with group context A.

Note that, as before, the condition for the first world will never be triggered by any no-regret learning algorithm (we elaborate on that below), which ensures that $\mathbb{E}_{\sigma}|S^{\sigma}_A(\mathrm{FNR})| \ge c \cdot b \cdot T$. The proof is based on the following claims, whose proofs are deferred to the supplementary material.

1. In Phase I, any vanishing approximate regret algorithm enters the second world of Phase II.
2. This implies a lower bound on the false negative rate on A, i.e. $\mathrm{FNR}(A) \ge \gamma(\varepsilon) = \frac{99 - 2\varepsilon}{100}$.
3. In Phase II, any ε-approximate regret algorithm assigns large enough probability to expert $h_p$ for group B, implying an upper bound on the false negative rate on B, i.e. $\mathrm{FNR}(B) \le \frac{1}{2(1-b)}$.

Therefore this provides a gap in the false negative rates of at least α.

4 Fairness in composition with respect to an alternative metric

The negative results of the previous section give rise to the natural question of whether fairness in composition can be achieved for some other fairness metric in an online setting. We answer this question positively by suggesting the equalized error rates metric EER, which captures the average loss over the total number of examples (independent of whether this loss comes from false negative or false positive examples). The relevant subset induced by this metric, $S^{\sigma}_g(\mathrm{EER})$, is the set of all examples coming from group $g \in \mathcal{G}$.
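A sketch of how the equalized error rates gap of a learner would be measured on a realized run (same logging assumptions as before; every round of a group counts, regardless of the label):

```python
import numpy as np

def equalized_error_gap(sigma, learner_probs, expert_losses):
    """Largest difference in average learner loss across groups.
    S^sigma_g(EER) is simply every round whose group context is g."""
    rates = {}
    for g in {g for g, _ in sigma}:
        S = [t for t, (gt, _) in enumerate(sigma) if gt == g]
        rates[g] = float(np.mean([learner_probs[t] @ expert_losses[t] for t in S]))
    return max(rates.values()) - min(rates.values()), rates
```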
We again assume that experts are fair in isolation with respect to equalized error rate and show that a simple scheme, where we separately run one instance of multiplicative weights for each group, achieves fairness in composition (Theorem 3). The result holds for general loss functions (beyond pure classification) and is robust to the experts being only approximately fair in isolation. A crucial property we use is that multiplicative weights not only does not perform worse than the best expert; it also does not perform better. In fact, this property holds more generally for online learning algorithms with optimal regret guarantees [14]. Interestingly, not all algorithms can achieve fairness in composition even with respect to this refined notion. We provide two algorithm classes where this is unachievable. First, we show that any algorithm (subject to a technical condition satisfied by algorithms such as multiplicative weights and follow the perturbed leader) that ignores the group identity can be unboundedly unfair with respect to equalized error rates (Theorem 4). This suggests that the algorithm needs to actively discriminate based on the groups to achieve fairness with respect to equalized error rates. Second, we show a similar negative statement for any algorithm that satisfies the more involved guarantee of small shifting regret, thereby outperforming the best expert (Theorem 5). This suggests that the algorithm used should be good, but not too good. This result is, to the best of our knowledge, a first application where shifting regret may not be desirable, which may be of independent interest.

The positive result. We run separate instances of multiplicative weights with a fixed learning rate η, one for each group. More formally, for each pair of expert $f \in F$ and group $g \in \mathcal{G}$, we initialize weights $w^1_{f,g} = 1$. At round $t \in \{1, 2, \ldots, T\}$, an example with group context g(t) arrives and the learner selects a probability distribution based on the corresponding weights: $p^t_f = \frac{w^t_{f,g(t)}}{\sum_{j \in F} w^t_{j,g(t)}}$. Then the weights corresponding to group g(t) are updated exponentially: $w^{t+1}_{f,g} = w^t_{f,g} \cdot (1-\eta)^{\ell^t_f \cdot \mathbb{1}\{g(t)=g\}}$.

Theorem 3. For any α > 0 and any ε < α, running separate instances of multiplicative weights for each group with learning rate $\eta = \min(\varepsilon, \alpha/6)$ guarantees α-fairness in composition and ε-approximate regret of at most $O(|\mathcal{G}| \log(d)/\varepsilon)$.

Proof sketch. The proof is based on the property that multiplicative weights performs not only no worse than the best expert in hindsight but also no better. Therefore, the average performance of multiplicative weights on each group is approximately equal to the average performance of the best expert in that group. Since the experts are fair in isolation, the average performance of the best expert is the same in all groups, which guarantees the equalized error rates desideratum. We make these arguments formal in the supplementary material.

Remark 1. If the instance is instead only approximately fair in isolation with respect to equalized error rates, i.e. the error rates of the two experts are not exactly equal but within some constant κ, the same analysis implies (α + κ)-fairness in composition with respect to equalized error rates.
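A minimal sketch of this scheme; the class interface is an assumption for illustration:

```python
import numpy as np

class PerGroupMW:
    """One multiplicative-weights instance per group with a fixed
    learning rate eta (the scheme behind Theorem 3)."""
    def __init__(self, n_experts, groups, eta):
        self.w = {g: np.ones(n_experts) for g in groups}
        self.eta = eta

    def predict(self, g):
        # p^t is drawn only from the weights of the arriving group.
        return self.w[g] / self.w[g].sum()

    def update(self, g, losses):
        # Only the weights of the arriving group are updated.
        self.w[g] *= (1.0 - self.eta) ** np.asarray(losses)

# Usage, with eta = min(eps, alpha / 6) as in Theorem 3:
learner = PerGroupMW(n_experts=2, groups=["A", "B"], eta=min(0.01, 0.05 / 6))
p = learner.predict("A")
learner.update("A", [0.3, 0.7])
```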
Impossibility results for group-unaware algorithms. In the previous algorithm, it was crucial that the examples of one group do not interfere with the decisions of the algorithm on the other group. We show that, had we run one multiplicative weights algorithm in a group-unaware way, we would not have accomplished fairness in composition. In fact, this impossibility result holds for any algorithm with vanishing ε-approximate regret whose learning dynamic ($p^t$ at each round t) is a deterministic function of the difference between the cumulative losses of the experts (without taking their identity into consideration). This is satisfied, for instance, by multiplicative weights and follow the perturbed leader with a constant learning rate. Unlike the previous section, the impossibility results for equalized error rates require groups to arrive adversarially (which is also the case in the above positive result). The proof of the following theorem is provided in the supplementary material.

Theorem 4. For any α > 0 and any ε > 0, running a single algorithm from the above class in a group-unaware way is α-unfair in composition with respect to equalized error rate.

Impossibility results for shifting algorithms. The reader may also wonder whether it suffices to simply run separate learning algorithms on the two groups, or whether multiplicative weights has a special property. In the following theorem, we show that the latter is the case. In particular, multiplicative weights has the property of not doing better than the best expert in hindsight. The main representatives of algorithms that do not have such a property are the algorithms that achieve low approximate regret compared to a shifting benchmark (tracking the best expert). More formally, approximate regret against a shifting comparator $f^\star = (f^\star(1), \ldots, f^\star(T))$ is defined as:

$$\mathrm{ApxReg}_{\varepsilon,T}(f^\star) = \sum_{t=1}^{T} \sum_{f \in F} p^t_f \ell^t_f - (1 + \varepsilon) \sum_{t=1}^{T} \ell^t_{f^\star(t)},$$

and typical guarantees are $\mathbb{E}[\mathrm{ApxReg}_{\varepsilon,T}(f^\star)] = O(K(f^\star) \cdot \ln(dT)/\varepsilon)$, where $K(f^\star) = \sum_{t=2}^{T} \mathbb{1}\{f^\star(t) \neq f^\star(t-1)\}$ is the number of switches in the comparator.
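A sketch instantiating this shifting benchmark, computing both the ε-approximate regret against a given comparator sequence and its number of switches $K(f^\star)$:

```python
import numpy as np

def shifting_apx_regret(learner_probs, expert_losses, comparator, eps):
    """eps-approximate regret against a shifting comparator
    f* = (f*(1), ..., f*(T)); K counts the comparator's switches."""
    alg = sum(p @ l for p, l in zip(learner_probs, expert_losses))
    bench = sum(l[f] for l, f in zip(expert_losses, comparator))
    K = sum(f1 != f0 for f0, f1 in zip(comparator, comparator[1:]))
    return alg - (1.0 + eps) * bench, K
```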
We show that any algorithm that can achieve such a guarantee even when $K(f^\star) = 2$ does not satisfy fairness in composition with respect to equalized error rate. This indicates that, for the purpose of fairness with respect to equalized error rates, the algorithm not being too good is essential. This is established in the following theorem, whose proof is deferred to the supplementary material.

Theorem 5. For any α < 1/2 and ε > 0, for any algorithm that can achieve the vanishing approximate regret property against shifting comparators f with K(f) = 2, running separate instances of the algorithm for each group is α-unfair in composition with respect to equalized error rate.

5 Discussion

In this paper, we introduce the study of avoiding discrimination towards protected groups in online settings with non-i.i.d. examples. Our impossibility results for equalized odds consist of only two phases, which highlights the challenge in correcting for historical biases in online decision making. Our work also opens up a quest towards definitions that are relevant and tractable in non-i.i.d. online settings for specific tasks. We introduce the notion of equalized error rates, which can be a useful metric for non-discrimination in settings where all examples that contribute towards the performance also contribute towards fairness. This is the case in settings where all mistakes are similarly costly, as in speech recognition, recommender systems, or online advertising. However, we do not claim that its applicability is universal. For instance, consider college admission with two perfectly balanced groups that correspond to ethnicity (equal sizes of the two groups and equal numbers of positives and negatives within each group). A racist program organizer can choose to admit all students of one group and decline the students of the other, while satisfying equalized error rates — this does not satisfy equalized odds. Given the impossibility result we established for equalized odds, it is interesting to identify definitions that work well for the different tasks one encounters in online non-i.i.d. settings. Moreover, although our positive results extend to the case where predictors are vetted to be approximately non-discriminatory, they do not say anything about the case where the predictors do not satisfy this property. We therefore view our work only as a first step towards understanding non-discrimination in non-i.i.d. online settings.

Acknowledgements

The authors would like to thank Manish Raghavan for useful discussions that improved the presentation of the paper. This work was supported by the NSF grants CCF-1800317 and CCF-1563714, as well as a Google Ph.D. Fellowship.
1. What is the focus of the paper regarding fairness in online learning?
2. What are the strong points of the paper, particularly in terms of its writing style and explanation of ideas?
3. What are the weak points of the paper, especially regarding its lack of real-world experiments and explanations of certain applications?
4. How does the reviewer assess the overall quality of the paper, considering both its strengths and weaknesses?
Review
The authors consider the setting of fairness in online learning. It is one of the initial works in the domain. The ideas used are not very novel, but it is interesting as one of the first works in the domain. The authors prove an impossibility result for achieving fairness with the equal false positive metric, but show a positive result with the equal expected error metric.

Strong Points:
[S1] The paper is well written and easy to follow.
[S2] The authors have done a good job of explaining the ideas with simple examples.

Weak Points:
[W1] I think the motivation behind the equal error rate applications (speech processing, recommender systems) is not well explained. I would have liked one case study or a small motivational example to better convey the intuitions.
[W2] There are no experiments to show the validity of the results on real-world datasets. They would help us better understand the advantages and disadvantages of the two metrics considered in the paper.

Overall, I like the work. It does not involve many fancy results but presents all the results in a simple manner that is easy to follow. I am disappointed not to see any real-world experiments, though, and hence rate it marginally above the bar (explanation in W2).
NIPS
Title Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification

Abstract The interdependence between nodes in graphs is key to improving class predictions on nodes and is utilized in approaches like Label Propagation (LP) or in Graph Neural Networks (GNNs). Nonetheless, uncertainty estimation for non-independent node-level predictions is under-explored. In this work, we explore uncertainty quantification for node classification in three ways: (1) We derive three axioms explicitly characterizing the expected predictive uncertainty behavior in homophilic attributed graphs. (2) We propose a new model, Graph Posterior Network (GPN), which explicitly performs Bayesian posterior updates for predictions on interdependent nodes. GPN provably obeys the proposed axioms. (3) We extensively evaluate GPN and a strong set of baselines on semi-supervised node classification, including detection of anomalous features and detection of left-out classes. GPN outperforms existing approaches for uncertainty estimation in the experiments.

1 Introduction

Accurate and rigorous uncertainty estimation is key for reliable machine learning models in safety-critical domains [67]. It quantifies the confidence of machine learning models, thus allowing them to validate knowledgeable predictions or flag predictions on unknown input domains. Uncertainty is commonly divided into aleatoric and epistemic uncertainty [28]. The aleatoric uncertainty accounts for irreducible uncertainty (e.g., due to inherent sensor noise). The epistemic uncertainty accounts for a lack of information for accurate prediction (e.g., test data significantly different from training data). Traditionally, machine learning models assume i.i.d. inputs, thus performing predictions based on input features only. For uncertainty estimation on i.i.d. inputs, a large class of definitions, models and evaluation methods have been introduced [28, 62, 3, 78, 50]. Further, uncertainty estimation has been successfully applied to different tasks, e.g. out-of-distribution (OOD) or shift detection [78], active learning [75, 55], continual learning [4] or reinforcement learning [18]. In contrast, uncertainty estimation on interdependent nodes is more complex than on i.i.d. inputs and under-explored [3]. A node in an attributed graph is characterized by two types of information: its features and its neighborhood. While the feature information indicates the node position in the feature space — similarly to i.i.d. inputs —, the neighborhood information indicates the additional node position in the network space. To leverage the neighborhood information, recent graph neural networks (GNNs) successfully proposed to enrich and correct the possibly noisy information of the features of a single node by aggregating them with the features of its neighborhood [46, 92, 48]. This naturally leads to the distinction between predictions without network effects, based exclusively on a node's own feature representation, and predictions with network effects, based on neighborhood aggregation. The aggregation step commonly assumes network homophily, which states that nodes with similar properties tend to connect to each other more densely, thus violating the i.i.d. assumption between node features given their neighborhood. The core motivation of our work is to transfer some of the existing uncertainty estimation definitions, models and evaluations from i.i.d.
inputs to interdependent node inputs by leveraging both the feature and the neighborhood information. In particular, we aim at an accurate quantification of the aleatoric and epistemic uncertainty without and with network effects under network homophily (see Fig. 1).

Our contribution. In this work, we consider uncertainty estimation on semi-supervised node classification. First, we derive three axioms which materialize reasonable uncertainty for non-independent inputs. These axioms cover the traditional notions of aleatoric and epistemic uncertainty and distinguish between the uncertainty with and without network effects. Second, we propose Graph Posterior Network (GPN; project page including code at https://www.daml.in.tum.de/graph-postnet) for uncertainty estimation for node classification and prove formally that it follows the axiom requirements, contrary to popular GNNs. Third, we build an extensive evaluation setup for uncertainty estimation which relies on assessing the quality of uncertainty estimates via OOD detection and robustness against shifts of the attributed graph properties. Both the OOD data and the attributed graph shifts distinguish between attribute and structure anomalies. The theoretical properties of GPN manifest in these experiments, where it outperforms all other baselines on uncertainty evaluation.

2 Related Work

In this section, we cover the related work for predictive uncertainty estimation for i.i.d. inputs and for graphs. To this end, we review the commonly accepted axioms defining the desired uncertainty estimation under different circumstances, the methods capable of consistent uncertainty quantification, and the evaluation validating the quality of the uncertainty estimates in practice.

Uncertainty for i.i.d. inputs – The related work for uncertainty quantification on i.i.d. inputs is rich, as for example shown in a recent survey [3]. Axioms: Far from ID data, the predicted uncertainty is expected to be high [66, 15, 51, 30]. Close to ID data, the desired uncertainty is more complicated. Indeed, while some works expected models to be robust to small dataset shifts [78, 89], other works expected to detect near OOD classes based on uncertainty [98, 50, 13]. Methods: Many methods already exist for uncertainty quantification for i.i.d. inputs like images or tabular data. A first family of models quantifies uncertainty by aggregating statistics (e.g. mean, variance or entropy) from sub-networks with different weights. Important examples are ensembles [52, 96, 97, 38], dropout [88] or Bayesian Neural Networks (BNN) [9, 20, 59, 24, 21]. Most of these approaches require multiple forward passes for uncertainty quantification. Further, dropout and BNN may have other pitfalls regarding their limited applicability to more complex tasks [77, 41, 34, 27]. A second family quantifies uncertainty by using the logit information. Important examples are temperature scaling, which rescales the logits after training [35, 56], and energy-based models, which interpret the logits as energy scores [57, 33]. A third family of models quantifies uncertainty based on deep Gaussian Processes (GP). Important examples use GP at activation level [68] or at (last) layer level [53, 51, 91, 8]. Finally, a last family of models quantifies uncertainty by directly parameterizing a conjugate prior distribution over the target variable. Important examples explicitly parameterize prior distributions [86, 63, 60, 61, 6] or posterior distributions [14, 15].
Methods based on GP and conjugate priors usually have the advantage of deterministic and fast inference. Evaluation: Previous works have already proposed empirical evaluation of uncertainty estimation by looking at accuracy, calibration or OOD detection metrics under dataset shifts or adversarial perturbations for i.i.d. inputs [78, 50]. In contrast with all these approaches, this work studies uncertainty quantification for classification of interdependent nodes.

Uncertainty for graphs – Notably, the recent survey [3] points out that there is only a limited number of studies on uncertainty quantification for GNNs and semi-supervised learning. Moreover, they recommend proposing new methods. Axioms: To the best of our knowledge, only [23] proposed explicit axioms for node classification, for non-attributed graphs. They expect disconnected nodes to recover prior predictions and nodes with higher beliefs to be more convincing. In this work, we clarify the desired uncertainty estimation for node classification on attributed graphs based on motivated and explicit axioms. Methods: The largest family of models for uncertainty for graphs are dropout- or Bayesian-based methods. Important examples propose to drop or assign probabilities to edges [83, 16, 37, 19, 42]. Further works proposed to combine the uncertainty on the graph structure with uncertainty on the transformation weights, similarly to BNN [22, 101, 79, 80]. Importantly, these models do not directly quantify uncertainty on the prediction. Similarly to the i.i.d. case, a second family of models focuses on deterministic uncertainty quantification. Important examples mostly use Graph Gaussian Processes, which do not easily scale to large graphs [74, 103, 58, 12]. Only [102] explicitly parameterized a Dirichlet conjugate prior. They combined it with multiple components (Graph-Based Kernel, dropout, Teacher Network, loss regularizations) which cannot easily distinguish between uncertainty without and with network effects. In contrast, GPN is a simple approach based on conjugate prior parametrization and disentangles uncertainty with and without network effects. Evaluation: The evaluation of most of those methods was not focused on the quality of the uncertainty estimates but on the target task metrics (e.g. accuracy for classification, distance to ground truth for regression). Some methods focus on robustness of the target task metrics against adversarial perturbations [36, 107, 106]. Other methods only relied on uncertainty quantification to build more robust models [104, 25]. For node classification, only a few works evaluated uncertainty by using Left-Out classes or detection of misclassified samples [102], active learning [74] or visualization [12]. Note that the uncertainty evaluations proposed on molecules at graph level [100, 84, 5, 40, 90] are an orthogonal problem. In this work, we propose a sound and extensive evaluation for uncertainty in node classification. It distinguishes between OOD nodes w.r.t. features and structure, and graph dataset shifts w.r.t. the percentage of perturbed node features and the percentage of perturbed edges.

3 Uncertainty Quantification for Node Classification

We consider the task of (semi-supervised) node classification on an attributed graph $\mathcal{G} = (A, X)$ with adjacency matrix $A \in \{0, 1\}^{N \times N}$ and node attribute matrix $X \in \mathbb{R}^{N \times D}$.
We aim at inferring the labels $y^{(v)} \in \{1, \ldots, C\}$ as well as the aleatoric uncertainty $u^{(v)}_{\mathrm{alea}}$ and the epistemic uncertainty $u^{(v)}_{\mathrm{epist}}$ of unlabeled nodes $v \in \mathbb{T}$, given a set of labelled nodes $u \in \mathbb{U}$ in the graph, where $\mathbb{V} = \mathbb{T} \cup \mathbb{U}$ denotes the set of vertices.

3.1 Axioms

Uncertainty estimation in the setting of interdependent inputs is not well studied. It often leaves the expected behavior and interpretations for uncertainty estimation unclear. Thus, we need well-grounded axioms to derive meaningful models. In this section, we aim at specifying the desired uncertainty predictions under various circumstances in homophilic attributed graphs. To this end, we propose three axioms which are based on the following two distinctions. The first distinction differentiates between aleatoric and epistemic uncertainty, which are commonly used concepts under the i.i.d. assumptions [28, 62]. The second distinction differentiates between uncertainty without and with network effects, which is motivated by the concepts of attribute and structure anomalies used in the attributed graph setting [11]. These new axioms cover all possible combinations arising from these distinctions and extend the axioms proposed by [23] for non-attributed graphs. We designed the axioms to be informal and generic so that they are application-independent, model-agnostic and do not require complex mathematical notation, similarly to [23, 76]. In practice, formal definitions need to instantiate general concepts like aleatoric/epistemic uncertainty and with/without network effects, noting that some definitions might be more convenient depending on the task.

The first axiom deals with (epistemic and aleatoric) uncertainty estimation without network effects (see Fig. 1a, 1c):

Axiom 3.1. A node's prediction in the absence of network effects should only depend on its own features. A node with features more different from training features should be assigned higher uncertainty.

Axiom 3.1 states that if a node v has no neighbors, then the final prediction $p^{(v)}$ should only depend on its own node features $x^{(v)}$. Further, for anomalous features the model should fall back to safe prior predictions, indicating high aleatoric and epistemic uncertainty. This aligns with [23], which expects to recover prior predictions for non-attributed nodes without network effects, and [66, 15], which expect to recover prior predictions far from training data for i.i.d. inputs.

The second axiom deals with epistemic uncertainty estimation with network effects (see Fig. 1c, 1d):

Axiom 3.2. All else being equal, if a node's prediction in the absence of network effects is more epistemically certain, then its neighbors' predictions in the presence of network effects should become more epistemically certain.

Axiom 3.2 states that a node v with confident feature predictions $x^{(v)}$ is expected to be convincing and make its neighbors $u \in \mathcal{N}(v)$ more confident. Conversely, a node with anomalous features is expected to make its neighborhood less confident. This axiom materializes the network homophily assumption at the epistemic level, i.e. connected nodes have similar epistemic uncertainty estimates. For non-attributed graphs, [23] similarly expects a more confident node to have more influence on a direct neighbor.

The third axiom deals with aleatoric uncertainty estimation with network effects (see Fig. 1a, 1b):

Axiom 3.3.
All else being equal, a node's prediction in the presence of network effects should have higher aleatoric uncertainty if its neighbors' predictions in the absence of network effects have high aleatoric uncertainty. Further, a node's prediction in the presence of network effects should have higher aleatoric uncertainty if its neighbors' predictions in the absence of network effects are more conflicting.

Axiom 3.3 states that no clear classification decision should be made for a node v if no clear classification decisions can be made for its neighbors. Further, the classification decision becomes less certain if a neighbor has a conflicting classification decision. Note that this axiom is more subtle than the direct application of network homophily at the aleatoric level. Indeed, a node can have high aleatoric uncertainty contrary to its neighbors, which predict different classes with low aleatoric uncertainty. This aligns with the intuition that conflicting information from the neighborhood provides an irreducible uncertainty to the considered node.

3.2 Graph Posterior Network

The Bayesian update rule is a key component of GPN to model uncertainty on the predicted categorical distribution. For a single categorical distribution $y \sim \mathrm{Cat}(p)$, the standard Bayesian update is straightforward. A natural choice for a prior distribution over the parameters p is its conjugate prior, i.e. the Dirichlet distribution $\mathbb{P}(p) = \mathrm{Dir}(\alpha^{\mathrm{prior}})$ with $\alpha^{\mathrm{prior}} \in \mathbb{R}^C_+$. Given the observations $y^{(1)}, \ldots, y^{(N)}$, the Bayesian update then consists in applying Bayes' theorem

$$\mathbb{P}\left(p \mid \{y^{(j)}\}_{j=1}^{N}\right) \propto \mathbb{P}\left(\{y^{(j)}\}_{j=1}^{N} \mid p\right) \times \mathbb{P}(p) \quad (1)$$

producing the posterior distribution $\mathbb{P}(p \mid \{y^{(j)}\}_{j=1}^{N}) = \mathrm{Dir}(\alpha^{\mathrm{post}})$, where $\alpha^{\mathrm{post}} = \alpha^{\mathrm{prior}} + \beta$ are the parameters of the posterior and $\beta_c = \sum_j \mathbb{1}_{y^{(j)} = c}$ are the class counts. This framework naturally disentangles the aleatoric and epistemic uncertainty by defining the Dirichlet mean $\bar{p} = \frac{\alpha}{\alpha_0}$ and the total evidence count $\alpha_0 = \sum_c \alpha_c$. Indeed, the aleatoric uncertainty is commonly measured by the entropy of the categorical distribution, i.e. $u_{\mathrm{alea}} = \mathbb{H}[\mathrm{Cat}(\bar{p})]$ [62, 14, 15], and the epistemic uncertainty can be measured by the total evidence count $\alpha_0$ of observations, i.e. $u_{\mathrm{epist}} = -\alpha_0$ [14, 15]. Alternatively, the epistemic uncertainty can also be measured with the Dirichlet differential entropy [62]. Note that the reparameterization using $\bar{p}$ and $\alpha_0$ can apply to any class counts, including the prior counts $\alpha^{\mathrm{prior}}$, the class counts β and the posterior counts $\alpha^{\mathrm{post}}$.

For classification, the predicted categorical distribution $\hat{y}^{(v)} \sim \mathrm{Cat}(p^{(v)})$ additionally depends on the specific input v. Hence, the input-dependent Bayesian rule [14, 15] extends the Bayesian treatment of a single categorical distribution to classification by predicting an individual posterior update for any possible input. Specifically, it first introduces a fixed Dirichlet prior over the categorical distribution, $p^{(v)} \sim \mathrm{Dir}(\alpha^{\mathrm{prior}})$, where $\alpha^{\mathrm{prior}} \in \mathbb{R}^C_+$ is usually set to 1, and second predicts the input-dependent update $\beta^{(v)}$ which forms the posterior distribution $p^{(v)} \sim \mathrm{Dir}(\alpha^{\mathrm{post},(v)})$, where the posterior parameters are equal to

$$\alpha^{\mathrm{post},(v)} = \alpha^{\mathrm{prior}} + \beta^{(v)}. \quad (2)$$

The variable $\beta^{(v)}$ can be interpreted as learned class pseudo-counts, and its parametrization is crucial.
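A minimal sketch of this conjugate update and of the two uncertainty measures (the small additive constant is only a numerical guard, not part of the definition):

```python
import numpy as np

def dirichlet_posterior(alpha_prior, labels, n_classes):
    """Standard conjugate update: alpha_post = alpha_prior + class counts beta."""
    beta = np.bincount(labels, minlength=n_classes)
    return alpha_prior + beta

def uncertainties(alpha):
    """Aleatoric = entropy of the mean categorical; epistemic = -alpha_0."""
    alpha0 = alpha.sum()
    p_bar = alpha / alpha0
    aleatoric = -np.sum(p_bar * np.log(p_bar + 1e-12))
    epistemic = -alpha0
    return aleatoric, epistemic

alpha_post = dirichlet_posterior(np.ones(3), labels=np.array([0, 0, 2, 1, 0]), n_classes=3)
print(alpha_post, uncertainties(alpha_post))
```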
For i.i.d. inputs, PostNet [14] models the pseudo-counts $\beta^{(v)}$ in two main steps: (1) it maps the input features $x^{(v)}$ onto a low-dimensional latent vector $z^{(v)} = f_\theta(x^{(v)}) \in \mathbb{R}^H$; (2) it fits one conditional probability density $\mathbb{P}(z^{(v)} \mid c; \phi)$ per class on this latent space with normalizing flows. The final pseudo-count for class c is set proportional to its respective conditional density, i.e. $\beta^{(v)}_c = N \, \mathbb{P}(z^{(v)} \mid c; \phi) \, \mathbb{P}(c)$, where N is a total certainty budget and $\mathbb{P}(c) = \frac{1}{C}$ for balanced classes. Note that this implies $\alpha^{(v)}_0 = N \, \mathbb{P}(z^{(v)} \mid \phi)$. This architecture has the advantage of decreasing the evidence outside the known distribution when increasing the evidence inside the known distribution, thus leading to consistent uncertainty estimation far from training data.

Bayesian Update for Interdependent Inputs. We propose a simple yet efficient modification for parameterizing $\beta^{(v)}_c$ to extend the input-dependent Bayesian update to interdependent attributed nodes. The core idea is to first predict the feature class pseudo-counts $\beta^{\mathrm{ft},(v)}$ based on the independent node features only, and then diffuse them to form the aggregated class pseudo-counts $\beta^{\mathrm{agg},(v)}$ based on the neighborhood features. Hence, the feature class pseudo-counts $\beta^{\mathrm{ft},(v)}$ intuitively act as uncertainty estimates without network effects, while the aggregated class pseudo-counts $\beta^{\mathrm{agg},(v)}$ intuitively act as uncertainty estimates with network effects. To this end, GPN performs three main steps (see Fig. 2). (1) A (feature) encoder maps the features of v onto a low-dimensional latent representation z, i.e. $z^{(v)} = f_\theta(x^{(v)}) \in \mathbb{R}^H$. In practice, we use a simple MLP encoder in our experiments, similarly to APPNP [48]. (2) One conditional probability density per class, $\mathbb{P}(z^{(v)} \mid c; \phi)$, is used to compute $\beta^{\mathrm{ft},(v)}_c$, i.e. $\beta^{\mathrm{ft},(v)}_c \propto \mathbb{P}(z^{(v)} \mid c; \phi)$. Note that the total feature evidence $\alpha^{\mathrm{ft},(v)}_0 = \sum_c \beta^{\mathrm{ft},(v)}_c$ and the parameter $\bar{p}^{\mathrm{ft},(v)} = \beta^{\mathrm{ft},(v)} / \alpha^{\mathrm{ft},(v)}_0$ are based only on node features and can be seen as epistemic and aleatoric uncertainty measures without network effects. In practice, we use radial normalizing flows for density estimation, similarly to [14], and scale the certainty budget N w.r.t. the latent dimension H, similarly to [15]. (3) A Personalized PageRank (PPR) message passing scheme is used to diffuse the feature class pseudo-counts $\beta^{\mathrm{ft},(v)}_c$ and form the aggregated class pseudo-counts $\beta^{\mathrm{agg},(v)}_c$, i.e.

$$\beta^{\mathrm{agg},(v)}_c = \sum_{u \in \mathbb{V}} \Pi^{\mathrm{ppr}}_{v,u} \beta^{\mathrm{ft},(u)}_c \quad (3)$$

where $\Pi^{\mathrm{ppr}}_{v,u}$ are the dense PPR scores implicitly reflecting the importance of node u on v. We approximate the dense PPR scores using power iteration, similarly to [48]. The aggregated pseudo-count $\beta^{\mathrm{agg},(v)}_c$ is then used in the input-dependent Bayesian update (see Eq. 2). Remark that the scores $\Pi^{\mathrm{ppr}}_{v,u}$ define a valid conditional distribution over all nodes, associated with the PPR random walk (i.e. $\sum_u \Pi^{\mathrm{ppr}}_{v,u} = 1$). It can be viewed as a soft neighborhood for v, accounting for all neighborhood hops through infinitely many message passing steps [48]. Hence, on the one hand, the PPR scores define a probability distribution over nodes using the node edges only. On the other hand, the quantity $\mathbb{P}(z^{(u)} \mid c; \phi)$ defines a probability distribution over nodes using the node features only. Therefore, we can equivalently rewrite this step using the probabilistic notations $\mathbb{P}(v \mid u) = \Pi^{\mathrm{ppr}}_{v,u}$ and $\mathbb{P}(u \mid c) = \mathbb{P}(z^{(u)} \mid c; \phi)$:

$$\beta^{\mathrm{agg},(v)}_c \propto \bar{\mathbb{P}}(v \mid c) = \sum_{u \in \mathbb{V}} \mathbb{P}(v \mid u) \, \mathbb{P}(u \mid c) \quad (4)$$

Interestingly, the quantity $\bar{\mathbb{P}}(v \mid c)$ defines a valid distribution which normalizes over all node features and accounts for the soft neighborhood (i.e. $\int \cdots \int \bar{\mathbb{P}}(v \mid c) \, dz^{(u_1)} \cdots dz^{(u_{|\mathbb{V}|})} = 1$). Hence, the message passing step is a simple but efficient method to transform the feature distributions of a single node into a joint distribution over the soft neighborhood features.
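A sketch of step (3), approximating the PPR diffusion of Eq. 3 by power iteration in the style of APPNP. For brevity it assumes a dense adjacency matrix and uses a row-normalized propagation matrix in place of the symmetrically normalized one typically used with APPNP; `alpha_teleport` is the PPR teleport probability:

```python
import numpy as np

def ppr_diffuse(beta_ft, A, alpha_teleport=0.1, n_iter=10):
    """Approximate beta_agg = Pi_ppr @ beta_ft by power iteration.
    beta_ft: [N, C] feature pseudo-counts; A: [N, N] adjacency matrix."""
    deg = A.sum(1, keepdims=True).clip(min=1)
    A_hat = A / deg                # row-normalized propagation (simplified)
    h = beta_ft.copy()
    for _ in range(n_iter):
        h = (1 - alpha_teleport) * A_hat @ h + alpha_teleport * beta_ft
    return h                       # aggregated pseudo-counts beta_agg
```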
Finally, the evidence $\alpha^{\mathrm{agg},(v)}_0 = \sum_c \beta^{\mathrm{agg},(v)}_c$ and the parameter $\bar{p}^{\mathrm{agg},(v)} = \beta^{\mathrm{agg},(v)} / \alpha^{\mathrm{agg},(v)}_0$ are based on neighborhood features and can be seen as epistemic and aleatoric uncertainty measures with network effects. Remark that the sequential processing of the feature information (steps (1)+(2)) and the network information (step (3)) in GPN is a key element to differentiate between the uncertainty without and with network effects, and is a building block for provably obeying the axioms. GPN extends both the APPNP [48] and PostNet [14] approaches. The key difference to APPNP is the density estimation modeling the epistemic uncertainty (steps (1)+(2)) and the input-dependent Bayesian update allowing recovery of the prior prediction (Eq. 2). The key difference to PostNet is the PPR diffusion, which accounts for dependence between nodes (step (3)).

Optimization. We follow [14] and train GPN by minimizing the following Bayesian loss with two terms:

$$\mathcal{L}^{(v)} = -\mathbb{E}_{p^{(v)} \sim \mathbb{Q}^{\mathrm{post},(v)}}\left[\log \mathbb{P}(y^{(v)} \mid p^{(v)})\right] - \lambda \, \mathbb{H}\left[\mathbb{Q}^{\mathrm{post},(v)}\right] \quad (5)$$

where λ is a regularization factor. It can be computed quickly in closed form and provides theoretical guarantees for optimal solutions [14]. All parameters of GPN are trained jointly. Similarly to [15], we also observed that "warm-up" training for the normalizing flows is helpful.
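With $\mathbb{Q}^{\mathrm{post},(v)} = \mathrm{Dir}(\alpha^{\mathrm{post},(v)})$, both terms of Eq. 5 have standard closed forms: $\mathbb{E}_{\mathrm{Dir}(\alpha)}[\log p_y] = \psi(\alpha_y) - \psi(\alpha_0)$ with ψ the digamma function, and the Dirichlet entropy has an explicit expression. A per-node sketch (assuming SciPy is available):

```python
import numpy as np
from scipy.special import digamma, gammaln

def bayesian_loss(alpha, y, lam=1e-4):
    """Closed-form version of Eq. 5 for one node with posterior Dir(alpha):
    -E_{Dir(alpha)}[log p_y] - lam * H[Dir(alpha)]."""
    a0 = alpha.sum()
    expected_ll = digamma(alpha[y]) - digamma(a0)       # E[log p_y]
    log_B = gammaln(alpha).sum() - gammaln(a0)          # log of the Dirichlet normalizer
    entropy = (log_B + (a0 - len(alpha)) * digamma(a0)
               - ((alpha - 1) * digamma(alpha)).sum())  # Dirichlet entropy
    return -expected_ll - lam * entropy

print(bayesian_loss(np.array([5.0, 1.0, 1.0]), y=0))
```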
3.3 Uncertainty Estimation Guarantees

In this section, we provide theoretical guarantees showing that GPN fulfills the three axioms under mild assumptions, given the specific definitions of the concepts of aleatoric/epistemic uncertainty and with/without network effects presented in Sec. 3.2. Throughout this section, we consider a GPN model parameterized with a (feature) encoder $f_\phi$ with piecewise ReLU activations, a PPR diffusion, and a density estimator $\mathbb{P}(z^{\mathrm{ft},(v)} \mid \omega)$ with bounded derivatives. We present detailed proofs in the appendix.

The first theorem shows that GPN follows Ax. 3.1 and guarantees that GPN achieves reasonable uncertainty estimation on extreme node features without network effects:

Theorem 1. Let us consider a GPN model. Let $f_\phi(x^{(v)}) = V^{(l)} x^{(v)} + a^{(l)}$ be the piecewise affine representation of the ReLU network $f_\phi$ on the finite number of affine regions $Q^{(l)}$ [7]. Suppose that the $V^{(l)}$ have independent rows. Then, for any node v and almost any $x^{(v)}$, we have $\mathbb{P}(f_\phi(\delta \cdot x^{(v)}) \mid c; \phi) \to 0$ as $\delta \to \infty$. Without network effects, this implies that $\beta^{\mathrm{ft},(v)}_c = \beta^{\mathrm{agg},(v)}_c \to 0$ as $\delta \to \infty$.

The proof relies on two main points: the equivalence of the GPN and PostNet architectures without network effects, and the uncertainty guarantees of PostNet far from training data, similarly to [15]. It intuitively states that, without network effects, GPN predicts small evidence (i.e. $\beta^{\mathrm{agg},(v)} \approx 0$) far from training features (i.e. $\|\delta \cdot x^{(v)}\| \to \infty$) and thus recovers the prior prediction (i.e. $\alpha^{\mathrm{post},(v)} \approx \alpha^{\mathrm{prior}}$). Note that, contrary to GPN, methods which do not account for node features (e.g. Label Propagation) or methods which only use ReLU activations [39] cannot validate Ax. 3.1. Further, methods which perform aggregation steps in early layers (e.g. GCN [46]) do not separate the processing of the feature and network information, making it unclear whether they fulfill the Ax. 3.1 requirements.

The second theorem shows that GPN follows Ax. 3.2 and guarantees that a node v becomes more epistemically certain if its neighbors are more epistemically certain:

Theorem 2. Let us consider a GPN model. Then, given a node v, the aggregated feature evidence $\alpha^{\mathrm{agg},(v)}_0$ is increasing if the feature evidence $\alpha^{\mathrm{ft},(u)}_0$ of one of its neighbors $u \in \mathcal{N}(v)$ is increasing.

The proof directly relies on Eq. 3. Intuitively, this theorem states that the epistemic uncertainty $u^{(v)}_{\mathrm{epist}} = -\alpha^{\mathrm{agg},(v)}_0$ of a node v with network effects decreases if the epistemic uncertainty of the neighboring nodes without network effects decreases. Note that, contrary to GPN, methods which do not model the epistemic uncertainty explicitly (e.g. GCN [46], GAT [92] or APPNP [48]) are not guaranteed to fulfill Ax. 3.2.

The third theorem shows that GPN follows Ax. 3.3. It guarantees that a node v becomes more aleatorically uncertain if its neighbors are more aleatorically uncertain, or if a neighbor's prediction disagrees more with the current node prediction:

Theorem 3. Let us consider a GPN model. Let $\bar{p}^{\mathrm{agg},(v)} = \beta^{\mathrm{agg},(v)} / \alpha^{\mathrm{agg},(v)}_0$ denote the diffused categorical prediction for node v, where $c^\ast$ is its winning class. Further, let $\bar{p}^{\mathrm{ft},(u)} = \beta^{\mathrm{ft},(u)} / \alpha^{\mathrm{ft},(u)}_0$ denote the non-diffused categorical prediction for a node $u \in \mathbb{V}$. First, there exist normalized weights $\Pi'_{v,u}$ such that $\sum_{u \in \mathbb{V}} \Pi'_{v,u} \, \mathbb{H}[\mathrm{Cat}(\bar{p}^{\mathrm{ft},(u)})] \le \mathbb{H}[\mathrm{Cat}(\bar{p}^{\mathrm{agg},(v)})]$. Second, if for any node $u \in \mathbb{V}$ the probability $\bar{p}^{\mathrm{ft},(u)}_{c^\ast}$ decreases, then $\mathbb{H}[\mathrm{Cat}(\bar{p}^{\mathrm{agg},(v)})]$ increases.

The proof of the first part of the theorem is based on the concavity of the entropy. Intuitively, it states that the aleatoric uncertainty $u^{(v)}_{\mathrm{alea}} = \mathbb{H}[\mathrm{Cat}(\bar{p}^{\mathrm{agg},(v)})]$ of a node v with network effects is lower-bounded by a weighted average of the aleatoric uncertainties without network effects of its soft neighborhood. The second part of the theorem intuitively states that if the prediction of a neighboring node u without network effects disagrees more with the current class prediction $c^\ast$ of the node v, then the aleatoric uncertainty with network effects becomes higher. Note that, contrary to GPN, methods which do not use edges (e.g. PostNet [14]) cannot validate Ax. 3.3 and Ax. 3.2.
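The first claim of Theorem 3 is essentially Jensen's inequality for the concave entropy function. A quick numerical illustration with made-up distributions and weights:

```python
import numpy as np

def H(p):
    return -np.sum(p * np.log(p + 1e-12))

# Entropy is concave: H of a mixture upper-bounds the mixture of entropies.
p_ft = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])  # non-diffused predictions
w = np.array([0.5, 0.3, 0.2])                          # normalized diffusion weights
p_agg = w @ p_ft                                       # diffused prediction
assert w @ np.array([H(p) for p in p_ft]) <= H(p_agg)
```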
3.4 Limitations & Impact

OOD data close to ID data. While GPN is guaranteed to provide consistent uncertainty estimates for nodes with extreme OOD features, it does not guarantee any specific uncertainty estimation behavior for OOD data close to ID data. Note that there exist two possible desired behaviors for OOD data close to ID data: being robust to small dataset shifts [78, 89] or detecting near OOD data [98, 50, 13]. The duality of these two views makes it unclear what the desired behavior would be even for i.i.d. data.

Non-homophilic uncertainty. Our approach assumes that connected nodes are likely to have similar uncertainty estimates, as defined in Ax. 3.2 and Ax. 3.3. Contrary to [105], we do not tackle the problem of heterophilic graphs, where two neighboring nodes might reasonably have different uncertainty estimates.

Task-specific OOD. Density estimation is shown to be inappropriate for OOD detection when acting directly on raw images [72, 17, 71] or on an arbitrarily transformed space [54]. One of the reasons is that normalizing flows learn pixel correlations in images. This phenomenon does not happen for tabular data with more semantic features [47]. First, note that, similarly to tabular data, semantic node features are less likely to suffer from the same flaws. Second, following previous works [14, 15, 47, 69, 98], GPN mitigates this issue by using density estimation on a latent space which is low-dimensional and task-specific. Nonetheless, we emphasize that GPN provides predictive uncertainty estimates which depend on the considered task, i.e. OOD data w.r.t. features which are not useful for the specific task are likely not to be encoded in the latent space, and thus not to be detected.

Broader Impact. The Assessment List for Trustworthy AI (ALTAI) [1] includes robustness, safety, and accountability. Uncertainty estimation is a key element to make AI systems follow these values. For example, an automated decision maker should know when it does not know. In this regard, GPN significantly improves the reliability of predictions on interdependent data under perturbations, even though a user should not blindly rely on it. Further, ALTAI also mentions privacy and fairness. Therein, we raise awareness of the risk of using interconnected information, which can amplify privacy or fairness violations in the presence of personal data.

4 Experiments

In this section, we provide an extensive evaluation setup for uncertainty quantification for node classification. It compares GPN to 13 baselines on 8 datasets and consists of two task types. First, we evaluate the detection of OOD nodes with feature perturbations and Left-Out classes. Second, we evaluate the robustness of accuracy, calibration and uncertainty metrics w.r.t. feature and edge shifts.

4.1 Set-up

Ablation. In the experiments, GPN uses an MLP as feature encoder, radial normalizing flows [82] for the density estimation, and a certainty budget N which scales with respect to the latent dimension [15]. We provide an ablation study covering aleatoric uncertainty through APPNP, feature-level estimates through PostNet, diffusing the resulting pseudo-counts after training, and GPN with diffusion of $\log(\beta^{\mathrm{ft},(v)}_c)$ instead of $\beta^{\mathrm{ft},(v)}_c$ (see App. E.1). The complete GPN model outperforms the ablated models for uncertainty estimation. Further, we provide a hyper-parameter study covering, for example, different numbers of flow layers, latent dimensions, and PPR teleport probabilities (see App. E.2).

Baselines. We used 13 baselines covering a wide variety of models for semi-supervised node classification and uncertainty estimation. We show the results of 5 baselines in the main paper and the full results in the appendix. They contain two standard GNNs (i.e. Vanilla GCN VGCN [46, 87] and APPNP [48]), one robust GNN (i.e. RGCN [104]), one dropout-based method for GNNs (i.e. DropEdge [83]), two Graph Gaussian Processes methods (i.e. GGP [74] and Matern-GGP [12]), the Graph-based Kernel Dirichlet GCN method (i.e. GKDE-GCN [102]) and two parameter-less methods (i.e. GKDE [102] and Label Propagation LP, see App.). Further, we also compared to direct adaptations of dropout (i.e. VGCN-Dropout [29]), ensembles (i.e. VGCN-Ensemble [52]), BNN (i.e. VGCN-BNN [9]) and energy-based models (i.e. VGCN-Energy [57]) to vanilla GCNs. All models are trained using the same number of layers and a similar number of hidden dimensions. We used early stopping and report the used hyperparameters in the appendix. The results are averaged over 10 initialization seeds per split. Further model details are given in the appendix.

Datasets. We used 8 datasets with different properties, summarized in the appendix. We show the results of 3 datasets in the main paper and the full results in the appendix. They contain common citation network datasets (i.e. CoraML [65, 32, 31, 85], CiteSeer [32, 31, 85], PubMed [73], CoauthorPhysics [87], CoauthorCS [87]) and co-purchase datasets (i.e. AmazonPhotos [64, 87], AmazonComputers [64, 87]). The results are averaged over 10 initialization splits with a train/val/test split of 5%/15%/80% using stratified sampling.
4.2 Results

OOD Detection. In this section, we evaluate uncertainty estimation for OOD detection. To this end, we use the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) with aleatoric scores $u_{\text{alea}}^{(v)}$ (Alea) and epistemic scores $u_{\text{epist}}^{(v)}$ (Epist), similarly to [14, 102, 60, 63, 61, 57]. For GPN, we differentiate between epistemic uncertainty scores without network effects (w/o Net.) and with network effects (w/ Net.). Further, we report results with the Area Under the Precision-Recall Curve (AUC-PR) in the appendix. The definition of OOD for nodes in the presence of feature and network information is more complex than for i.i.d. input features. Hence, we propose two types of OOD nodes: nodes with OOD feature perturbations and nodes from Left-Out classes. For feature perturbations, we compute the accuracy on the perturbed nodes (OOD-Acc) to evaluate whether the model can correct anomalous features. For Left-Out classes, we compute the accuracy on the observed classes (ID-Acc). We report the short results in Tab. 1. We set a threshold of 64 GiB and 12 hours per training run. We also exclude methods which do not use attributes from the detection of OOD feature perturbations.

Feature perturbations: These perturbations aim at isolating the contribution of the node feature information to the model predictions. To this end, we randomly select a subset of the nodes. For each single node $v$, we perturb its features individually using a Bernoulli or a Normal distribution (i.e. $x^{(v)} \sim \text{Ber}(0.5)$ or $x^{(v)} \sim \mathcal{N}(\mathbf{0}, \mathbf{1})$), keeping all other node features fixed. We then compare the uncertainty predictions on the perturbed and unperturbed nodes; a sketch of this protocol follows below. On one hand, Bernoulli noise corresponds to small perturbations within the domain of discrete bag-of-words features. On the other hand, Normal noise corresponds to extreme perturbations out of the domain of discrete bag-of-words features. In practice, we expect out-of-domain perturbations to be easily detected [14]. First, we remark that the feature-based uncertainty estimates of GPN achieve an absolute improvement of at least +15% and +29% for Bernoulli and Normal perturbations, respectively, over all baselines using network effects. This shows that GPN disentangles the uncertainty without and with network effects well. Second, we remark that all uncertainty estimates with network effects achieve poor results. This is expected if models can recover the correct prediction after the aggregation steps. Specifically, we observe that GPN achieves an accuracy improvement between +16% and +64% for Normal perturbations on perturbed nodes compared to the baselines. This stresses that GPN performs a consistent evidence aggregation from the neighborhood to recover from anomalous features. Further, note that GPN is still capable of detecting those perturbed nodes almost perfectly using feature uncertainty. These remarks align with Ax. 3.1.
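A minimal sketch of this perturbation protocol, assuming any model-specific scoring function `uncertainty_fn` that maps a feature matrix to one uncertainty value per node (all names hypothetical):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def feature_perturbation_auc(X, uncertainty_fn, frac=0.1, mode="normal", seed=0):
    """Perturb a random node subset and measure how well an uncertainty
    score separates perturbed (OOD) nodes from clean (ID) nodes."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    ood_idx = rng.choice(n, size=int(frac * n), replace=False)
    X_pert = X.astype(float)  # copy; floats so Normal noise is representable
    if mode == "bernoulli":   # in-domain noise for bag-of-words features
        X_pert[ood_idx] = rng.binomial(1, 0.5, size=X_pert[ood_idx].shape)
    else:                     # extreme, out-of-domain noise
        X_pert[ood_idx] = rng.normal(size=X_pert[ood_idx].shape)
    scores = uncertainty_fn(X_pert)         # higher = more uncertain
    labels = np.zeros(n)
    labels[ood_idx] = 1.0
    return roc_auc_score(labels, scores)    # AUC-ROC for OOD detection
```

A score of 1.0 means the uncertainty estimate ranks every perturbed node above every clean one, which is the criterion behind the AUC-ROC numbers in Tab. 1.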
Left-Out classes: The detection of Left-Out classes involves both feature and neighborhood information. In this case, we remove the Left-Out classes from the training set but keep them in the graph, similarly to [102]. We observe that the uncertainty estimates with network effects of GPN achieve an absolute improvement between +12% and +16% compared to its uncertainty estimates without network effects. This highlights the benefit of incorporating network information for uncertainty predictions when OOD samples (i.e. samples from the Left-Out classes) are likely to be connected to each other. This remark aligns with Ax. 3.2. Further, GPN outperforms the other baselines by +2% to +22% for LOC detection while maintaining a competitive accuracy on the other classes.

Misclassified samples: In addition to the OOD scores, we also report the results for the detection of misclassified samples with aleatoric and epistemic uncertainty on several datasets and models in App. E.3 for the sake of completeness. GPN performs competitively with the baselines. Moreover, we observe that epistemic uncertainty is better for OOD detection and aleatoric uncertainty is better for misclassification detection, as already observed e.g. in [102].

Attributed Graph Shifts. In this section, we focus on evaluating the robustness of the accuracy and calibration, and the evolution of the uncertainty estimates, under node feature shifts and edge shifts. This aligns with [78], which aims at evaluating the reliability of uncertainty estimates under dataset shifts for i.i.d. inputs. Specifically, we evaluate the evolution of the accuracy, the ECE [70] calibration score, and the epistemic and aleatoric uncertainty measures.

Feature shifts: We perturb the features of a fraction of the nodes using unit Gaussian perturbations. We report the short results in Fig. 3 and the full results in the appendix. On one hand, we observe that GPN is significantly more robust to feature perturbations than all baselines. Indeed, the accuracy of GPN decreases by less than 5% even when 80% of the nodes are perturbed, while the accuracy of the other baselines decreases by more than 50% when only 20% of the nodes are perturbed. Similarly, we observe that GPN remains calibrated even when a high fraction of nodes is perturbed, contrary to the baselines. Hence, GPN intuitively discards uncertain features from perturbed nodes and only accounts for certain features from other nodes for more accurate predictions. On the other hand, we observe that, as desired, the average epistemic certainty of GPN consistently decreases (i.e. its epistemic uncertainty grows) when more nodes are perturbed. This remark aligns with Ax. 3.2. In contrast, the baselines dangerously become more certain while achieving a poorer accuracy, similarly to ReLU networks [39]. Hence, GPN predictions are significantly more reliable than the baselines under feature shifts.

Edge shifts: For edge shifts, we perturb a fraction of the edges at random. We report the results in the appendix. As desired, we observe that the aleatoric uncertainty increases for all models including GPN. This aligns with Ax. 3.3 and the expectation that a conflicting neighborhood should lead to more aleatorically uncertain predictions. Furthermore, the average epistemic uncertainty of GPN remains constant, which is reasonable since the average evidence of a node's neighborhood remains constant.

Qualitative Evaluation. We show the abstracts of the CoraML papers achieving the highest and the lowest epistemic uncertainty without network effects in Tab. 2 and in the appendix. Interestingly, we observe that the most uncertain papers correspond to short and unconventional abstracts, which can be seen as anomalous features. Furthermore, we also rank the nodes w.r.t. their epistemic uncertainty with network effects. In this case, we observe that 78/100 of the nodes with the highest uncertainty do not belong to the largest connected component of the CoraML dataset. We propose additional uncertainty visualizations for GPN in App. E.6.
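The calibration results above use the ECE score [70]; for reference, here is a minimal sketch of the standard equal-width-binned ECE computation (our own illustrative implementation, not the paper's evaluation code):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: population-weighted average gap between mean confidence and
    accuracy per confidence bin. probs: [N, C], labels: [N] true classes."""
    conf = probs.max(axis=1)                    # confidence of predicted class
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():                          # skip empty bins
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap            # weight by bin population
    return ece
```

A well-calibrated model keeps this gap small in every bin, which is what the feature-shift experiments above track as an increasing fraction of nodes is perturbed.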
Inference & training time. We provide a comparison of inference and training times for most of the datasets and models under consideration in App. E.7. GPN needs a single forward pass for uncertainty estimation but requires the additional evaluation of one normalizing flow per class compared to APPNP. Hence, GPN brings a small computational overhead for uncertainty estimation at inference time. Furthermore, GPN usually converges relatively fast during training and does not require precomputing kernel values. In contrast, GKDE-GCN [102] requires the computation of the underlying graph kernel with a complexity of $\mathcal{O}(N^2)$, where $N$ is the number of nodes in the graph. Finally, GPN is significantly more efficient than dropout or ensemble approaches, as it does not require training or evaluating multiple models.

5 Conclusion

We introduce a well-grounded framework for uncertainty estimation on interdependent nodes. First, we propose explicit and motivated axioms describing desired properties for aleatoric and epistemic uncertainty in the absence and in the presence of network effects. Second, we propose GPN, a GNN for uncertainty estimation which provably follows our axioms. GPN performs a Bayesian update over the class predictions based on density estimation and diffusion. Third, we conduct extensive experiments to evaluate the uncertainty performance of a broad range of baselines for OOD detection and robustness against node feature or edge shifts. GPN outperforms all baselines in these experiments.

Acknowledgments and Disclosure of Funding

This research was supported by the BMW AG, by the Helmholtz Association under the joint research school "Munich School for Data Science - MUDS", and by a grant from Software Campus through the German Federal Ministry of Education and Research.
1. What are the three contributions of the paper, and how do they relate to each other?
2. How does the proposed model behave with respect to its characterization of uncertainty, and what are the implications of this behavior?
3. What are the strengths and weaknesses of the proposed model in terms of its ability to quantify uncertainty, and how does it compare to existing methods?
4. What are the key concepts and definitions used in the paper, and how are they related to each other?
5. How does the paper's approach to predictive uncertainty differ from other approaches in the field, and what are the advantages and disadvantages of this approach?
Summary Of The Paper Review
Summary Of The Paper The paper makes three contributions: (i) three axioms are specified to characterize the requirements of predictive uncertainty behaviour in homophilic attributed graphs; (ii) a new inference model is proposed and theorems are provided that demonstrate how the model behaves with respect to its characterization of uncertainty; and (iii) via numerical experiments, it is demonstrated that the proposed model achieves better uncertainty estimation performance.

Review The paper contains some very interesting ideas and the proposed model is well-constructed. The experiments are thorough, with comparisons to multiple baselines across 8 datasets. These experiments demonstrate that the proposed model significantly outperforms existing methods in terms of its uncertainty quantification for the task of node classification. On the negative side, the paper is very difficult to read, and there needs to be considerably more care with definitions of terms and explanations of concepts.

(1) The axioms are imprecisely stated and contain important elements that are not defined. The figures used to explain the axioms introduce concepts and variables that are not defined or explained until later in Section 3.2. For example, it is not clear what Dirichlet distribution is being referred to. In Axiom 3.1 it is written "A node with features more different from training features should be assigned higher uncertainty". This sentence raises several questions. How is "different" defined or measured? What does it mean to "assign a node higher uncertainty"? When the axiom is imprecise in this way, it is very difficult to claim that a particular model obeys it. The authors claim that "The first theorem shows that GPN follows Ax. 3.1", but it actually shows something that is much more specific. It only characterizes the behaviour of the prediction as the distance (measured in any way) approaches infinity. It does not establish that for one node with "more different" features than another the uncertainty in the prediction will be higher (for some definition of "different" and some definition of "uncertainty"). Axiom 3.2 states that "a node's prediction… should have higher aleatoric uncertainty". At this stage of the paper, there is no quantitative definition of aleatoric or epistemic uncertainty. The approaches to quantitatively assess these are only made clear in Section 3.2. The text after Axiom 3.2 is confusing: "a node v with confident feature predictions x(v)". Above, x(v) is specified as the node features; now they are node feature "predictions"? In a similar vein, "more conflicting" in Axiom 3.3 is not defined. Are the axioms supposed to relate only to a predictive model under a specific framework for uncertainty characterization? Or do we need to provide definitions and metrics for all of these concepts – "uncertainty", "aleatoric uncertainty", "epistemic uncertainty", "different", "conflicting" – as well as the predictive model?

(2) It is extremely difficult to understand the proposed method and model from Section 3.2 alone. There is too much reliance on material from [14, 15]. While it is fine to refer a reader to other papers for further information and detail, the main method should be clear to a reader without it being essential to refer to other work. For example, phrases like "the epistemic uncertainty can be measured by the pseudo-count α0 of fictitious observations" are very hard to understand when "fictitious observations" have not been defined or introduced.
Overall, I would like to recommend the paper for acceptance based on its technical content, but I think the axioms and the presentation need major improvement.

After author response: I acknowledge that the authors have provided a thorough commentary regarding my criticisms, and they can all be addressed by modifying the text (although I view these changes as important, since I think some of the claims about the theoretical results are not strictly correct as they stand and the axioms are vague). I have raised my overall score.
1. What is the main contribution of the paper, and how does it relate to the conference theme?
2. What are the strengths and weaknesses of the proposed framework for uncertainty estimation?
3. How does the reviewer assess the experiment part of the paper, and what suggestions do they have for improvement?
4. Are there any minor issues or typos in the paper that the reviewer has noticed?
Summary Of The Paper Review
Summary Of The Paper This paper introduced a well-grounded framework for uncertainty estimation on interdependent nodes. The paper first proposed explicit and motivated axioms describing desired properties for aleatoric and epistemic uncertainty in the absence or in the presence of network effects. Then it proposed GPN, a GNN for uncertainty estimation which provably follows the proposed axioms. Furthermore, extensive experiments were conducted to evaluate the uncertainty performance of a broad range of baselines for OOD detection and robustness against node feature or edge shifts. GPN outperforms all baselines in these experiments.

Review This paper is well written and pertinent to NeurIPS. The related work is extensive and impressive, and I like the axiom style. But I have some concerns about the experiment part.

Since GPN was designed based on APPNP and PostNet, it is necessary to do an ablation study to analyze the contribution of each component. It might be necessary to add one important baseline, PostNet (ignoring the graph) or PostNet+GCN (simply combining PostNet and GCN), to show the advantage of GPN over PostNet on graph data.

For the baseline GKD-GCN, it should be evaluated based on vacuity uncertainty, which is naturally designed to detect OOD. Note that GKD-GCN was proposed to improve the estimation of vacuity.

It is better to evaluate the proposed method on the misclassification detection task, which is an important task in uncertainty estimation [1] [2]. This paper also discussed aleatoric uncertainty, which is not appropriate for OOD detection but is suitable for misclassification detection.

Why was GPN robust to feature perturbations, especially for accuracy? Which component supports this observation?

Some minor issues:
The indexes (1), (2), (3) are missing in Figure 2.
Typo in Table 1, "GDK-GCN" → "GKD-GCN".
Typo in Line-358, "Tab. 3" → "Figure 3".

[1] Hendrycks, Dan, and Kevin Gimpel. "A baseline for detecting misclassified and out-of-distribution examples in neural networks." ICLR 2017.
[2] Malinin, Andrey, and Mark Gales. "Predictive uncertainty estimation via prior networks." NeurIPS 2018.
NIPS
Title Graph Posterior Network: Bayesian Predictive Uncertainty for Node Classification Abstract The interdependence between nodes in graphs is key to improve class predictions on nodes and utilized in approaches like Label Propagation (LP) or in Graph Neural Networks (GNNs). Nonetheless, uncertainty estimation for non-independent node-level predictions is under-explored. In this work, we explore uncertainty quantification for node classification in three ways: (1) We derive three axioms explicitly characterizing the expected predictive uncertainty behavior in homophilic attributed graphs. (2) We propose a new model Graph Posterior Network (GPN) which explicitly performs Bayesian posterior updates for predictions on interdependent nodes. GPN provably obeys the proposed axioms. (3) We extensively evaluate GPN and a strong set of baselines on semi-supervised node classification including detection of anomalous features, and detection of left-out classes. GPN outperforms existing approaches for uncertainty estimation in the experiments. 1 Introduction Accurate and rigorous uncertainty estimation is key for reliable machine learning models in safetycritical domains [67]. It quantifies the confidence of machine learning models, thus allowing them to validate knowledgeable predictions or flag predictions on unknown input domains. Uncertainty is commonly divided in aleatoric and epistemic uncertainty [28]. The aleatoric uncertainty accounts for irreducible uncertainty (e.g., due to inherent sensor noise). The epistemic uncertainty accounts for a lack of information for accurate prediction (e.g., test data significantly different from training data). Traditionally, machine learning models assume i.i.d. inputs, thus performing predictions based on input features only. For uncertainty estimation on i.i.d. inputs, a large class of definitions, models and evaluation methods have been introduced [28, 62, 3, 78, 50]. Further, uncertainty estimation has been successfully applied to different tasks e.g. out-of-distribution (OOD) or shift detection [78], active learning [75, 55], continual learning [4] or reinforcement learning [18]. In contrast, uncertainty estimation on interdependent nodes is more complex than on i.i.d. inputs and under-explored [3]. A node in an attributed graph is characterized by two types of information: its features and its neighborhood. While the feature information indicates the node position in the feature space – similarly to i.i.d. inputs –, the neighborhood information indicates the additional node position in the network space. To leverage the neighborhood information, recent graph neural networks (GNNs) successfully proposed to enrich and correct the possibly noisy information of the features of a single node by aggregating them with the features of its neighborhood [46, 92, 48]. It naturally leads to the distinction between predictions without network effects based exclusively on their own node feature representation, and predictions with network effects based on neighborhood ∗equal contribution 35th Conference on Neural Information Processing Systems (NeurIPS 2021). aggregation. The aggregation step commonly assumes network homophily which states that nodes with similar properties tend to connect to each other more densely, thus violating the i.i.d. assumption between node features given their neighborhood. The core motivation of our work is to transfer some of the existing uncertainty estimation definitions, models and evaluations from i.i.d. 
inputs to interdependent node inputs by leveraging both the feature and the neighborhood information. In particular, we aim at an accurate quantification of the aleatoric and epistemic uncertainty without and with network effects under network homophily (see Fig. 1). Our contribution. In this work, we consider uncertainty estimation on semi-supervised node classification. First, we derive three axioms which materialize reasonable uncertainty for non-independent inputs. These axioms cover the traditional notions of aleatoric and epistemic uncertainty and distinguish between the uncertainty with and without network effects. Second, we propose Graph Posterior Network (GPN; project page including code at https://www.daml.in.tum.de/graph-postnet) for uncertainty estimation for node classification and prove formally that it follows the axiom requirements, contrary to popular GNNs. Third, we build an extensive evaluation setup for uncertainty estimation which relies on the assessment of uncertainty estimation quality for OOD detection and robustness against shifts of the attributed graph properties. Both OOD data and attributed graph shifts distinguish between attribute and structure anomalies. The theoretical properties of GPN manifest in these experiments, where it outperforms all other baselines on uncertainty evaluation. 2 Related Work In this section, we cover the related work on predictive uncertainty estimation for i.i.d. inputs and for graphs. To this end, we review the commonly accepted axioms defining the desired uncertainty estimation under different circumstances, the methods capable of consistent uncertainty quantification, and the evaluations validating the quality of the uncertainty estimates in practice. Uncertainty for i.i.d. inputs – The related work for uncertainty quantification on i.i.d. inputs is rich, as for example shown in a recent survey [3]. Axioms: Far from ID data, the predicted uncertainty is expected to be high [66, 15, 51, 30]. Close to ID data, the desired uncertainty is more complicated. Indeed, while some works expect models to be robust to small dataset shifts [78, 89], other works expect to detect near-OOD classes based on uncertainty [98, 50, 13]. Methods: Many methods already exist for uncertainty quantification for i.i.d. inputs like images or tabular data. A first family of models quantifies uncertainty by aggregating statistics (e.g. mean, variance or entropy) from sub-networks with different weights. Important examples are ensembles [52, 96, 97, 38], dropout [88] or Bayesian Neural Networks (BNN) [9, 20, 59, 24, 21]. Most of these approaches require multiple forward passes for uncertainty quantification. Further, dropout and BNN may have other pitfalls regarding their limited applicability to more complex tasks [77, 41, 34, 27]. A second family quantifies uncertainty by using the logit information. Important examples are temperature scaling, which rescales the logits after training [35, 56], and energy-based models, which interpret the logits as energy scores [57, 33]. A third family of models quantifies uncertainty based on deep Gaussian Processes (GP). Important examples use GP at activation level [68] or at (last) layer level [53, 51, 91, 8]. Finally, a last family of models quantifies uncertainty by directly parameterizing a conjugate prior distribution over the target variable. Important examples explicitly parameterize prior distributions [86, 63, 60, 61, 6] or posterior distributions [14, 15].
Methods based on GP and conjugate priors usually have the advantage of deterministic and fast inference. Evaluation: Previous works have already proposed empirical evaluations of uncertainty estimation by looking at accuracy, calibration or OOD detection metrics under dataset shifts or adversarial perturbations for i.i.d. inputs [78, 50]. In contrast with all these approaches, this work studies uncertainty quantification for classification of interdependent nodes. Uncertainty for graphs – Notably, the recent survey [3] points out that there is only a limited number of studies on uncertainty quantification for GNNs and semi-supervised learning, and recommends proposing new methods. Axioms: To the best of our knowledge, only [23] proposed explicit axioms for node classification, for non-attributed graphs. They expect disconnected nodes to recover prior predictions and nodes with higher beliefs to be more convincing. In this work, we clarify the desired uncertainty estimation for node classification on attributed graphs based on motivated and explicit axioms. Methods: The largest family of models for uncertainty on graphs consists of dropout- or Bayesian-based methods. Important examples propose to drop or assign probabilities to edges [83, 16, 37, 19, 42]. Further works proposed to combine the uncertainty on the graph structure with uncertainty on the transformation weights, similarly to BNN [22, 101, 79, 80]. Importantly, these models do not directly quantify uncertainty on the prediction. Similarly to the i.i.d. case, a second family of models focuses on deterministic uncertainty quantification. Important examples mostly use Graph Gaussian Processes, which do not easily scale to large graphs [74, 103, 58, 12]. Only [102] explicitly parameterized a Dirichlet conjugate prior. They combined it with multiple components (graph-based kernel, dropout, teacher network, loss regularizations) which cannot easily distinguish between uncertainty without and with network effects. In contrast, GPN is a simple approach based on conjugate prior parametrization and disentangles uncertainty with and without network effects. Evaluation: The evaluation of most of these methods was not focused on the quality of the uncertainty estimates but on the target task metrics (e.g. accuracy for classification, distance to ground truth for regression). Some methods focus on robustness of the target task metrics against adversarial perturbations [36, 107, 106]. Other methods only relied on uncertainty quantification to build more robust models [104, 25]. For node classification, only few works evaluated uncertainty, using Left-Out classes or detection of misclassified samples [102], active learning [74] or visualization [12]. Note that the proposed uncertainty evaluations on molecules at graph level [100, 84, 5, 40, 90] address an orthogonal problem. In this work, we propose a sound and extensive evaluation for uncertainty in node classification. It distinguishes between OOD nodes w.r.t. features and structure, and graph dataset shifts w.r.t. the percentage of perturbed node features and the percentage of perturbed edges. 3 Uncertainty Quantification for Node Classification We consider the task of (semi-supervised) node classification on an attributed graph G = (A, X) with adjacency matrix A ∈ {0, 1}^{N×N} and node attribute matrix X ∈ R^{N×D}.
We aim at inferring the labels y^(v) ∈ {1, ..., C} as well as the aleatoric uncertainty u_alea^(v) and the epistemic uncertainty u_epist^(v) of unlabeled nodes v ∈ T, given a set of labelled nodes u ∈ U in the graph, where V = T ∪ U denotes the set of vertices. 3.1 Axioms Uncertainty estimation in the setting of interdependent inputs is not well-studied. It often leaves the expected behavior and interpretations for uncertainty estimation unclear. Thus, we need well-grounded axioms to derive meaningful models. In this section, we aim at specifying the desired uncertainty predictions under various circumstances in homophilic attributed graphs. To this end, we propose three axioms which are based on the two following distinctions. The first distinction differentiates between aleatoric and epistemic uncertainty, which are commonly used concepts under the i.i.d. assumption [28, 62]. The second distinction differentiates between uncertainty without and with network effects, motivated by the concepts of attribute and structure anomalies used in the attributed graph setting [11]. These new axioms cover all possible combinations encountered by these distinctions and extend the axioms proposed by [23] for non-attributed graphs. We designed the axioms to be informal and generic so that they are application-independent, model-agnostic and do not require complex mathematical notations, similarly to [23, 76]. In practice, formal definitions need to instantiate general concepts like aleatoric/epistemic uncertainty and with/without network effects, noting that some definitions might be more convenient depending on the task. The first axiom deals with (epistemic and aleatoric) uncertainty estimation without network effects (see Fig. 1a, 1c): Axiom 3.1. A node’s prediction in the absence of network effects should only depend on its own features. A node with features more different from training features should be assigned higher uncertainty. Axiom 3.1 states that if a node v has no neighbors, then the final prediction p^(v) should only depend on its own node features x^(v). Further, for anomalous features the model should fall back to safe prior predictions, indicating high aleatoric and epistemic uncertainty. This aligns with [23], which expects to recover prior predictions for non-attributed nodes without network effects, and [66, 15], which expect to recover prior predictions far from training data for i.i.d. inputs. The second axiom deals with epistemic uncertainty estimation with network effects (see Fig. 1c, 1d): Axiom 3.2. All else being equal, if a node’s prediction in the absence of network effects is more epistemically certain, then its neighbors’ predictions in the presence of network effects should become more epistemically certain. Axiom 3.2 states that a node v with confident feature predictions x^(v) is expected to be convincing and make its neighbors u ∈ N(v) more confident. Conversely, a node with anomalous features is expected to make its neighborhood less confident. This axiom materializes the network homophily assumption at the epistemic level, i.e. connected nodes have similar epistemic uncertainty estimates. For non-attributed graphs, [23] similarly expects a more confident node to have more influence on a direct neighbor. The third axiom deals with aleatoric uncertainty estimation with network effects (see Fig. 1a, 1b): Axiom 3.3.
All else being equal, a node’s prediction in the presence of network effects should have higher aleatoric uncertainty if its neighbors’ predictions in the absence of network effects have high aleatoric uncertainty. Further, a node's prediction in the presence of network effects should have higher aleatoric uncertainty if its neighbors’ predictions in the absence of network effects are more conflicting. Axiom 3.3 states that no clear classification decision should be made for a node v if no clear classification decisions can be made for its neighbors. Further, the classification decision becomes less certain if a neighbor has a conflicting classification decision. Note that this axiom is more subtle than the direct application of network homophily at the aleatoric level. Indeed, a node can have high aleatoric uncertainty contrary to its neighbors, which predict different classes with low aleatoric uncertainty. This aligns with the intuition that conflicting information from the neighborhood provides an irreducible uncertainty to the considered node. 3.2 Graph Posterior Network The Bayesian update rule is a key component of GPN to model uncertainty on the predicted categorical distribution. For a single categorical distribution y ∼ Cat(p), the standard Bayesian update is straightforward. A natural choice for a prior distribution over the parameters p is its conjugate prior, i.e. the Dirichlet distribution P(p) = Dir(α^prior) with α^prior ∈ R_+^C. Given the observations y^(1), ..., y^(N), the Bayesian update then consists in applying Bayes’ theorem P(p | {y^(j)}_{j=1}^N) ∝ P({y^(j)}_{j=1}^N | p) × P(p) (1) producing the posterior distribution P(p | {y^(j)}_{j=1}^N) = Dir(α^post), where α^post = α^prior + β are the parameters of the posterior and β_c = Σ_j 1_{y^(j)=c} are the class counts. This framework naturally disentangles the aleatoric and epistemic uncertainty by defining the Dirichlet mean p̄ = α/α_0 and the total evidence count α_0 = Σ_c α_c. Indeed, the aleatoric uncertainty is commonly measured by the entropy of the categorical distribution, i.e. u_alea = H[Cat(p̄)] [62, 14, 15], and the epistemic uncertainty can be measured by the total evidence count α_0 of observations, i.e. u_epist = −α_0 [14, 15]. Alternatively, the epistemic uncertainty can also be measured with the Dirichlet differential entropy [62]. Note that the reparameterization using p̄ and α_0 can apply to any class counts, including the prior counts α^prior, the class counts β and the posterior counts α^post. For classification, the predicted categorical distribution ŷ^(v) ∼ Cat(p^(v)) additionally depends on the specific input v. Hence, the input-dependent Bayesian rule [14, 15] extends the Bayesian treatment of a single categorical distribution to classification by predicting an individual posterior update for any possible input. Specifically, it first introduces a fixed Dirichlet prior over the categorical distribution, p^(v) ∼ Dir(α^prior) where α^prior ∈ R_+^C is usually set to 1, and second predicts the input-dependent update β^(v) which forms the posterior distribution p^(v) ∼ Dir(α^post,(v)), where the posterior parameters are equal to α^post,(v) = α^prior + β^(v). (2) The variable β^(v) can be interpreted as learned class pseudo-counts and its parametrization is crucial. For i.i.d. inputs, PostNet [14] models the pseudo-counts β^(v) in two main steps: (1) it maps the input features x^(v) onto a low-dimensional latent vector z^(v) = f_θ(x^(v)) ∈ R^H; (2) it fits one conditional probability density P(z^(v) | c; φ) per class on this latent space with normalizing flows.
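As a quick worked instance of the single-categorical update in Eq. 1 (the numbers here are ours, purely for illustration), take C = 3 classes, a flat prior and class counts β = (3, 0, 1):

```latex
\alpha^{\mathrm{prior}} = (1,1,1),\quad \beta = (3,0,1)
\;\Rightarrow\;
\alpha^{\mathrm{post}} = (4,1,2),\quad
\alpha_0 = 7,\quad
\bar{p} = \bigl(\tfrac{4}{7}, \tfrac{1}{7}, \tfrac{2}{7}\bigr),
\qquad
u_{\mathrm{alea}} = \mathbb{H}\!\left[\mathrm{Cat}(\bar p)\right] \approx 0.96 \text{ nats},
\quad
u_{\mathrm{epist}} = -\alpha_0 = -7 .
```

Scaling β by a constant leaves p̄ (and hence the aleatoric uncertainty) unchanged while strictly decreasing the epistemic uncertainty, which is exactly the disentanglement the model exploits.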
In PostNet, the final pseudo-count for class c is then set proportional to its respective conditional density, i.e. β_c^(v) = N · P(z^(v) | c; φ) · P(c), where N is a total certainty budget and P(c) = 1/C for balanced classes. Note that this implies α_0^(v) = N · P(z^(v) | φ). This architecture has the advantage of decreasing the evidence outside the known distribution when increasing the evidence inside the known distribution, thus leading to consistent uncertainty estimation far from training data. Bayesian Update for Interdependent Inputs. We propose a simple yet efficient modification for parameterizing β_c^(v) to extend the input-dependent Bayesian update to interdependent attributed nodes. The core idea is to first predict the feature class pseudo-counts β^ft,(v) based on independent node features only, and then diffuse them to form the aggregated class pseudo-counts β^agg,(v) based on neighborhood features. Hence, the feature class pseudo-counts β^ft,(v) intuitively act as uncertainty estimates without network effects, while the aggregated class pseudo-counts β^agg,(v) intuitively act as uncertainty estimates with network effects. To this end, GPN performs three main steps (see Fig. 2). (1) A (feature) encoder maps the features of v onto a low-dimensional latent representation z, i.e. z^(v) = f_θ(x^(v)) ∈ R^H. In practice, we use a simple MLP encoder in our experiments, similarly to APPNP [48]. (2) One conditional probability density per class, P(z^(v) | c; φ), is used to compute β_c^ft,(v), i.e. β_c^ft,(v) ∝ P(z^(v) | c; φ). Note that the total feature evidence α_0^ft,(v) = Σ_c β_c^ft,(v) and the parameter p̄^ft,(v) = β^ft,(v) / α_0^ft,(v) are only based on node features and can be seen as epistemic and aleatoric uncertainty measures without network effects. In practice, we used radial normalizing flows for density estimation, similarly to [14], and scaled the certainty budget N w.r.t. the latent dimension H, similarly to [15]. (3) A Personalized PageRank (PPR) message passing scheme is used to diffuse the feature class pseudo-counts β_c^ft,(v) and form the aggregated class pseudo-counts β_c^agg,(v), i.e. β_c^agg,(v) = Σ_{u∈V} Π^ppr_{v,u} β_c^ft,(u) (3) where Π^ppr_{v,u} are the dense PPR scores implicitly reflecting the importance of node u on v. We approximate the dense PPR scores using power iteration, similarly to [48]. The aggregated pseudo-count β_c^agg,(v) is then used in the input-dependent Bayesian update (see Eq. 2). Remark that the scores Π^ppr_{v,u} define a valid conditional distribution over all nodes associated to the PPR random walk (i.e. Σ_u Π^ppr_{v,u} = 1). It can be viewed as a soft neighborhood for v accounting for all neighborhood hops through infinitely many message passing steps [48]. Hence, on one hand, the PPR scores define a probability distribution over nodes using the node edges only. On the other hand, the quantity P(z^(u) | c; φ) defines a probability distribution over nodes using the node features only. Therefore, we can equivalently rewrite this step using the probabilistic notations P(v | u) = Π^ppr_{v,u} and P(u | c) = P(z^(u) | c; φ): β_c^agg,(v) ∝ P̄(v | c) = Σ_{u∈V} P(v | u) P(u | c) (4) Interestingly, the quantity P̄(v | c) defines a valid distribution which normalizes over all node features and accounts for the soft neighborhood (i.e. ∫...∫ P̄(v | c) dz^(u_1)...dz^(u_|V|) = 1). Hence, the message passing step is a simple but efficient method to transform the feature distribution of a single node into a joint distribution over the soft neighborhood features.
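To make steps (1)–(3) concrete, the following is a minimal sketch of a GPN-style forward pass in PyTorch. It is our illustration, not the authors' implementation: `encoder` and `class_log_density` are stand-ins (the paper uses an MLP encoder and one radial normalizing flow per class, and scales the budget N with the latent dimension), and `ppr_diffusion` uses the APPNP-style power iteration mentioned above.

```python
import torch

def ppr_diffusion(beta_ft, A_hat, alpha=0.1, K=10):
    """Power-iteration approximation of the dense PPR diffusion in Eq. 3:
    beta_agg^(v) = sum_u Pi^ppr_{v,u} beta_ft^(u), as in APPNP.
    A_hat: [N, N] normalized adjacency; beta_ft: [N, C] pseudo-counts."""
    beta = beta_ft
    for _ in range(K):
        beta = (1 - alpha) * (A_hat @ beta) + alpha * beta_ft
    return beta

def entropy(p, eps=1e-12):
    # H[Cat(p)] along the class dimension.
    return -(p * (p + eps).log()).sum(-1)

def gpn_forward(x, A_hat, encoder, class_log_density, alpha_prior=1.0, budget=1e3):
    # (1) Encode node features independently of the graph structure.
    z = encoder(x)                                    # [N, H]
    # (2) Per-class latent log-densities -> feature pseudo-counts
    #     beta_ft_c ∝ P(z | c); the uniform class prior 1/C is absorbed
    #     into the budget here.
    beta_ft = budget * class_log_density(z).exp()     # [N, C]
    alpha0_ft = beta_ft.sum(-1)                       # evidence w/o network
    p_ft = beta_ft / alpha0_ft.unsqueeze(-1)
    # (3) Diffuse pseudo-counts with PPR, then Bayesian update (Eq. 2).
    alpha_post = alpha_prior + ppr_diffusion(beta_ft, A_hat)
    alpha0 = alpha_post.sum(-1)                       # evidence w/ network
    p_bar = alpha_post / alpha0.unsqueeze(-1)
    return {
        "pred": p_bar,
        "u_alea": entropy(p_bar),       # aleatoric, with network effects
        "u_epist": -alpha0,             # epistemic, with network effects
        "u_alea_ft": entropy(p_ft),     # aleatoric, without network effects
        "u_epist_ft": -alpha0_ft,       # epistemic, without network effects
    }
```

Because the diffusion acts on pseudo-counts rather than on logits, low evidence at a node propagates to its soft neighborhood, which is the behavior the theorems below formalize.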
Finally, the evidence α_0^agg,(v) = Σ_c β_c^agg,(v) and the parameter p̄^agg,(v) = β^agg,(v) / α_0^agg,(v) are based on neighborhood features and can be seen as epistemic and aleatoric uncertainty measures with network effects. Remark that the sequential processing of the features (i.e. steps (1)+(2)) and the network information (i.e. step (3)) in GPN is a key element to differentiate between the uncertainty without and with network effects, and is a building block to provably obey the axioms. GPN extends both the APPNP [48] and PostNet [14] approaches. The key difference to APPNP is the density estimation modeling the epistemic uncertainty (i.e. steps (1)+(2)) and the input-dependent Bayesian update allowing to recover the prior prediction (i.e. Eq. 2). The key difference to PostNet is the PPR diffusion which accounts for dependence between nodes (step (3)). Optimization. We follow [14] and train GPN by minimizing the following Bayesian loss with two terms, i.e.: L^(v) = −E_{p^(v) ∼ Q^post,(v)} [log P(y^(v) | p^(v))] − λ H[Q^post,(v)] (5) where λ is a regularization factor. It can be computed quickly in closed form and provides theoretical guarantees for optimal solutions [14]. All parameters of GPN are trained jointly. Similarly to [15], we also observed that "warm-up" training for the normalizing flows is helpful. 3.3 Uncertainty Estimation Guarantees In this section, we provide theoretical guarantees showing that GPN fulfills the three axioms under mild assumptions, given the specific definitions of the concepts of aleatoric/epistemic uncertainty and with/without network effects presented in Sec. 3.2. Throughout this section, we consider a GPN model parameterized with a (feature) encoder f_φ with piecewise ReLU activations, a PPR diffusion, and a density estimator P(z^ft,(v) | ω) with bounded derivatives. We present detailed proofs in the appendix. The first theorem shows that GPN follows Ax. 3.1 and guarantees that GPN achieves reasonable uncertainty estimation on extreme node features without network effects: Theorem 1. Let us consider a GPN model. Let f_φ(x^(v)) = V^(l) x^(v) + a^(l) be the piecewise affine representation of the ReLU network f_φ on the finite number of affine regions Q^(l) [7]. Suppose that the V^(l) have independent rows; then, for any node v and almost any x^(v), we have P(f_φ(δ · x^(v)) | c; φ) → 0 as δ → ∞. Without network effects, it implies that β_c^ft,(v) = β_c^agg,(v) → 0 as δ → ∞. The proof relies on two main points: the equivalence of the GPN and PostNet architectures without network effects, and the uncertainty guarantees of PostNet far from training data, similarly to [15]. It intuitively states that, without network effects, GPN predicts small evidence (i.e. β^agg,(v) ≈ 0) far from training features (i.e. ||δ · x^(v)|| → ∞) and thus recovers the prior prediction (i.e. α^post,(v) ≈ α^prior). Note that, contrary to GPN, methods which do not account for node features (e.g. Label Propagation) or methods which only use ReLU activations [39] cannot validate Ax. 3.1. Further, methods which perform aggregation steps in early layers (e.g. GCN [46]) do not separate the processing of the feature and network information, making it unclear whether they fulfill the Ax. 3.1 requirements. The second theorem shows that GPN follows Ax. 3.2 and guarantees that a node v becomes more epistemically certain if its neighbors are more epistemically certain: Theorem 2. Let us consider a GPN model. Then, given a node v, the aggregated feature evidence α_0^agg,(v) is increasing if the feature evidence α_0^ft,(u) of one of its neighbors u ∈ N(v) is increasing.
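The mechanism behind Theorem 2 can be read off by summing Eq. 3 over classes and treating the per-node total evidences as the free variables (our rewriting, for intuition only):

```latex
\alpha_0^{\mathrm{agg},(v)}
  = \sum_c \beta_c^{\mathrm{agg},(v)}
  = \sum_{u \in \mathcal{V}} \Pi^{\mathrm{ppr}}_{v,u}\, \alpha_0^{\mathrm{ft},(u)},
\qquad
\frac{\partial\, \alpha_0^{\mathrm{agg},(v)}}{\partial\, \alpha_0^{\mathrm{ft},(u)}}
  = \Pi^{\mathrm{ppr}}_{v,u} \;\ge\; 0 .
```

Since the PPR scores are nonnegative and sum to one, the aggregated evidence is a convex combination of the neighbors' feature evidences, so raising any neighbor's evidence can only raise α_0^agg,(v).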
The proof directly relies on Eq. 3. Intuitively, this theorem states that the epistemic uncertainty u_epist^(v) = −α_0^agg,(v) of a node v with network effects decreases if the epistemic uncertainty of the neighboring nodes without network effects decreases. Note that, contrary to GPN, methods which do not model the epistemic uncertainty explicitly (e.g. GCN [46], GAT [92] or APPNP [48]) are not guaranteed to fulfill Ax. 3.2. The third theorem shows that GPN follows Ax. 3.3. It guarantees that a node v becomes more aleatorically uncertain if its neighbors are more aleatorically uncertain, or if a neighbor's prediction disagrees more with the current node's prediction: Theorem 3. Let us consider a GPN model. Let p̄^agg,(v) = β^agg,(v) / α_0^agg,(v) denote the diffused categorical prediction for node v, where c* is its winning class. Further, let p̄^ft,(u) = β^ft,(u) / α_0^ft,(u) denote the non-diffused categorical prediction for a node u ∈ V. First, there exist normalized weights Π′_{v,u} such that Σ_{u∈V} Π′_{v,u} H[Cat(p̄^ft,(u))] ≤ H[Cat(p̄^agg,(v))]. Second, if for any node u ∈ V the probability p̄_{c*}^ft,(u) decreases, then H[Cat(p̄^agg,(v))] increases. The proof of the first part of the theorem is based on the convexity of the entropy. Intuitively, it states that the aleatoric uncertainty u_alea^(v) = H[Cat(p̄^agg,(v))] of a node v with network effects is lower-bounded by a weighted average of the aleatoric uncertainties without network effects of its soft neighborhood. The second part of the theorem intuitively states that if the prediction of a neighboring node u without network effects disagrees more with the current class prediction c* of the node v, then the aleatoric uncertainty u_alea^(v) = H[Cat(p̄^agg,(v))] with network effects becomes higher. Note that, contrary to GPN, methods which do not use edges (e.g. PostNet [14]) cannot validate Ax. 3.3 and Ax. 3.2. 3.4 Limitations & Impact OOD data close to ID data. While GPN is guaranteed to provide consistent uncertainty estimates for nodes with extreme OOD features, it does not guarantee any specific uncertainty estimation behavior for OOD data close to ID data. Note that there exist two possible desired behaviors for OOD data close to ID data: being robust to small dataset shifts [78, 89] or detecting near-OOD data [98, 50, 13]. The duality of these two views makes it unclear what the desired behavior would be, even for i.i.d. data. Non-homophilic uncertainty. Our approach assumes that connected nodes are likely to have similar uncertainty estimates, as defined in Ax. 3.2 and Ax. 3.3. Contrary to [105], we do not tackle the problem of heterophilic graphs, where two neighboring nodes might reasonably have different uncertainty estimates. Task-specific OOD. Density estimation is shown to be inappropriate for OOD detection when acting directly on raw images [72, 17, 71] or on an arbitrarily transformed space [54]. One of the reasons is that normalizing flows learn pixel correlations in images. This phenomenon does not happen for tabular data with more semantic features [47]. First, note that, similarly to tabular data, semantic node features are less likely to suffer from the same flaws. Second, following previous works [14, 15, 47, 69, 98], GPN mitigates this issue by using density estimation on a latent space which is low-dimensional and task-specific. Nonetheless, we emphasize that GPN provides predictive uncertainty estimates which depend on the considered task, i.e. OOD data w.r.t.
features which are not useful for the specific task are likely not to be encoded in the latent space, and thus not to be detected. Broader Impact. The Assessment List for Trustworthy AI (ALTAI) [1] includes robustness, safety, and accountability. Uncertainty estimation is a key element to make AI systems follow these values. For example, an automated decision maker should know when it does not know. In this regard, GPN significantly improves the reliability of predictions on interdependent data under perturbations, even though a user should not blindly rely on it. Further, ALTAI also mentions privacy and fairness. Therein, we raise awareness of the risk of using interconnected information, which can amplify privacy or fairness violations in the presence of personal data. 4 Experiments In this section, we provide an extensive evaluation set-up for uncertainty quantification for node classification. It compares GPN to 13 baselines on 8 datasets and consists of two task types. First, we evaluate the detection of OOD nodes with feature perturbations and Left-Out classes. Second, we evaluate the robustness of accuracy, calibration and uncertainty metrics w.r.t. feature and edge shifts. 4.1 Set-up Ablation. In the experiments, GPN uses an MLP as feature encoder, radial normalizing flows [82] for the density estimation and a certainty budget N which scales with respect to the latent dimension [15]. We provide an ablation study covering aleatoric uncertainty through APPNP, feature-level estimates through PostNet, diffusing the resulting pseudo-counts after training, and GPN with diffusion of log β_c^ft,(v) instead of β_c^ft,(v) (see App. E.1). The complete GPN model outperforms the ablated models for uncertainty estimation. Further, we provide a hyper-parameter study covering, for example, different numbers of flow layers, latent dimensions and PPR teleport probabilities (see App. E.2). Baselines. We used 13 baselines covering a wide variety of models for semi-supervised node classification and uncertainty estimation. We show the results of 5 baselines in the main paper and the full results in the appendix. They contain two standard GNNs (i.e. Vanilla GCN VGCN [46, 87] and APPNP [48]), one robust GNN (i.e. RGCN [104]), one dropout-based method for GNNs (i.e. DropEdge [83]), two Graph Gaussian Processes methods (i.e. GGP [74] and Matern-GGP [12]), the Graph-based Kernel Dirichlet GCN method (i.e. GKDE-GCN [102]) and two parameter-less methods (i.e. GKDE [102] and Label Propagation (LP); see App.). Further, we also compared to direct adaptations of dropout (i.e. VGCN-Dropout [29]), ensembles (i.e. VGCN-Ensemble [52]), BNN (i.e. VGCN-BNN [9]) and energy-based models (i.e. VGCN-Energy [57]) to vanilla GCNs. All models are trained using the same number of layers and a similar number of hidden dimensions. We used early stopping and report the used hyperparameters in the appendix. The results are averaged over 10 initialization seeds per split. Further model details are given in the appendix. Datasets. We used 8 datasets with different properties, summarized in the appendix. We show the results of 3 datasets in the main paper and the full results in the appendix. They contain common citation network datasets (i.e. CoraML [65, 32, 31, 85], CiteSeer [32, 31, 85], PubMed [73], CoauthorPhysics [87], CoauthorCS [87]) and co-purchase datasets (i.e. AmazonPhotos [64, 87], AmazonComputers [64, 87]). The results are averaged over 10 initialization splits with a train/val/test split of 5%/15%/80% using stratified sampling.
Further, we evaluate on the large OGBN-Arxiv dataset with 169,343 nodes and 2,315,598 edges [43, 94]. Further dataset details are given in the appendix. 4.2 Results OOD Detection. In this section, we evaluate uncertainty estimation for OOD detection. To this end, we use the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) with aleatoric scores u_alea^(v) (Alea) and epistemic scores u_epist^(v) (Epist), similarly to [14, 102, 60, 63, 61, 57]. For GPN, we differentiate between epistemic uncertainty scores without network effects (w/o Net.) and with network effects (w/ Net.). Further, we report results with the Area Under the Precision-Recall Curve (AUC-PR) in the appendix. The definition of OOD for nodes in the presence of feature and network information is more complex than for i.i.d. input features. Hence, we propose two types of OOD nodes: nodes with OOD feature perturbations and nodes from Left-Out classes. For feature perturbations, we compute the accuracy on the perturbed nodes (OOD-Acc) to evaluate whether the model can correct anomalous features. For Left-Out classes, we compute the accuracy on the observed classes (ID-Acc). We report the short results in Tab. 1. We set a threshold of 64 GiB and 12 hours per training run. We also exclude methods which do not use attributes for the detection of OOD feature perturbations. Feature perturbations: These perturbations aim at isolating the contribution of the node feature information on the model predictions. To this end, we randomly select a subset of the nodes. For each single node v, we perturb its features individually using a Bernoulli or a Normal distribution (i.e. x^(v) ∼ Ber(0.5) and x^(v) ∼ N(0, 1)), keeping all other node features fixed. We then compare the uncertainty prediction on the perturbed and unperturbed node. On one hand, Bernoulli noise corresponds to small perturbations in the domain of discrete bag-of-words features. On the other hand, Normal noise corresponds to extreme perturbations out of the domain of discrete bag-of-words features. In practice, we expect out-of-domain perturbations to be easily detected [14]. First, we remark that the feature-based uncertainty estimates of GPN achieve an absolute improvement of at least +15% and +29% for Bernoulli and Normal perturbations over all baselines using network effects. This shows that GPN disentangles well the uncertainty without and with network effects. Second, we remark that all uncertainty estimates with network effects achieve poor results. This is expected if models can recover the correct prediction after aggregation steps. Specifically, we observe that GPN achieves an accuracy improvement of between +16% and +64% for Normal perturbations on perturbed nodes compared to baselines. This stresses that GPN performs a consistent evidence aggregation from the neighborhood to recover from anomalous features. Further, note that GPN is still capable of detecting those perturbed nodes almost perfectly using feature uncertainty. These remarks align with Ax. 3.1. Left-Out classes: Detection of Left-Out classes involves both feature and neighborhood information. In this case, we remove the Left-Out classes from the training set but keep them in the graph, similarly to [102]. We observe that the uncertainty estimates with network effects of GPN achieve an absolute improvement of between +12% and +16% compared to the uncertainty estimates without network effects. This highlights the benefit of incorporating network information for uncertainty predictions when OOD samples (i.e.
samples from the Left-Out classes) are likely to be connected to each other. This remark aligns with Ax. 3.2. Further, GPN outperforms the other baselines by +2% to +22% for LOC detection while maintaining a competitive accuracy on the other classes. Misclassified samples: In addition to the OOD scores, we also report the results for the detection of misclassified samples with aleatoric and epistemic uncertainty on several datasets and models in App. E.3 for the sake of completeness. GPN performs competitively with the baselines. Moreover, we observe that epistemic uncertainty is better for OOD detection and aleatoric uncertainty is better for misclassification detection, as already observed e.g. in [102]. Attributed Graph Shifts. In this section, we focus on evaluating the robustness of the accuracy and calibration, and the evolution of the uncertainty estimates, under node feature shifts and edge shifts. This aligns with [78], which aims at evaluating the reliability of uncertainty estimates under dataset shifts for i.i.d. inputs. Specifically, we evaluate the evolution of the accuracy, the ECE [70] calibration score, and the epistemic and aleatoric uncertainty measures. Feature shifts: We perturbed the features of a fraction of the nodes using unit Gaussian perturbations (the sketch at the end of this subsection makes the protocol concrete). We report the short results in Fig. 3 and the full results in the appendix. On one hand, we observe that GPN is significantly more robust to feature perturbations than all baselines. Indeed, the accuracy of GPN decreases by less than 5% even when 80% of the nodes are perturbed, while the accuracy of other baselines decreases by more than 50% when only 20% of the nodes are perturbed. Similarly, we observed that GPN remains calibrated even when a high fraction of nodes are perturbed, contrary to the baselines. Hence, GPN intuitively discards uncertain features from perturbed nodes and only accounts for certain features from other nodes for more accurate predictions. On the other hand, we observe that, as desired, the average epistemic uncertainty of GPN consistently increases when more nodes are perturbed. This remark aligns with Ax. 3.2. In contrast, the baselines dangerously become more certain while achieving a poorer accuracy, similarly to ReLU networks [39]. Hence, GPN predictions are significantly more reliable than the baselines under feature shifts. Edge shifts: For edge shifts, we perturbed a fraction of edges at random. We report the results in the appendix. As desired, we observe that the aleatoric uncertainty increases for all models, including GPN. This aligns with Ax. 3.3 and the expectation that a conflicting neighborhood should lead to more aleatorically uncertain predictions. Furthermore, the average epistemic uncertainty of GPN remains constant, which is reasonable since the average evidence of a node’s neighborhood remains constant. Qualitative Evaluation. We show the abstracts of the CoraML papers achieving the highest and the lowest epistemic uncertainty without network effects in Tab. 2 and in the appendix. Interestingly, we observed that the most uncertain papers correspond to short and unconventional abstracts, which can be seen as anomalous features. Furthermore, we also ranked the nodes w.r.t. their epistemic uncertainty with network effects. In this case, we observed that 78/100 nodes with the highest uncertainty do not belong to the largest connected component of the CoraML dataset. We propose additional uncertainty visualizations for GPN in App. E.6.
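For concreteness, here is a minimal sketch of the feature-shift protocol referenced above. It is our illustration, not the authors' evaluation code: `model_forward` is any forward pass returning predictions and epistemic scores (e.g. the `gpn_forward` sketch from Sec. 3.2, partially applied to its encoder and density), and all names and default fractions are ours.

```python
import torch

def feature_shift_eval(x, y, A_hat, model_forward, fractions=(0.0, 0.2, 0.5, 0.8)):
    """Perturb a growing fraction of nodes with unit Gaussian features and
    track accuracy and mean epistemic uncertainty under the shift."""
    N, D = x.shape
    rows = []
    for frac in fractions:
        x_shift = x.clone()
        idx = torch.randperm(N)[: int(frac * N)]
        x_shift[idx] = torch.randn(len(idx), D)   # x^(v) ~ N(0, 1)
        out = model_forward(x_shift, A_hat)
        acc = (out["pred"].argmax(-1) == y).float().mean().item()
        rows.append((frac, acc, out["u_epist"].mean().item()))
    return rows
```

Under Ax. 3.2, the mean epistemic uncertainty returned here should grow with the perturbed fraction; for overconfident ReLU baselines it may instead shrink.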
Inference & training time. We provide a comparison of inference and training times for most of the datasets and models under consideration in App. E.7. GPN needs a single pass for uncertainty estimation but requires the additional evaluation of one normalizing flow per class compared to APPNP. Hence, GPN brings a small computational overhead for uncertainty estimation at inference time. Furthermore, GPN usually converges relatively fast during training and does not require precomputing kernel values. In contrast, GKDE-GCN [102] requires the computation of the underlying graph kernel with a complexity of O(N^2), where N is the number of nodes in the graph. Finally, GPN is significantly more efficient than dropout or ensemble approaches, as it does not require training or evaluating multiple models. 5 Conclusion We introduce a well-grounded framework for uncertainty estimation on interdependent nodes. First, we propose explicit and motivated axioms describing desired properties for aleatoric and epistemic uncertainty in the absence or in the presence of network effects. Second, we propose GPN, a GNN for uncertainty estimation which provably follows our axioms. GPN performs a Bayesian update over the class predictions based on density estimation and diffusion. Third, we conduct extensive experiments to evaluate the uncertainty performance of a broad range of baselines for OOD detection and robustness against node feature or edge shifts. GPN outperforms all baselines in these experiments. Acknowledgments and Disclosure of Funding This research was supported by the BMW AG, by the Helmholtz Association under the joint research school “Munich School for Data Science - MUDS”, and by a grant from Software Campus through the German Federal Ministry of Education and Research.
1. What is the focus and contribution of the paper on graph learning? 2. What are the strengths of the proposed approach, particularly in terms of uncertainty estimation? 3. Do you have any concerns regarding the representation of interdependent node inputs? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations or areas for improvement in the proposed method?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors propose a Graph Posterior Network based on uncertainty theory to enhance node classification in graph learning. The uncertainty estimation based on interdependent node inputs, instead of assuming i.i.d. inputs, is interesting. Through theoretical analysis and experiments on 8 datasets, they claim that the proposed framework has advantages in node classification and OOD detection. Review In general, node classification based on uncertainty is not a new problem. However, the proposed method's study of the influence of interdependent inputs on GNNs is new. Furthermore, the proposed Graph Posterior Network is a combination of two existing methods: one is APPNP [47], the other is PostNet [14]. It is good to see that the authors discussed the key differences between this paper and the above two papers. The authors provide relatively comprehensive related work from two perspectives (uncertainty for i.i.d. inputs and uncertainty for graphs). This submission is technically sound. Most claims are well supported by theoretical analysis and experiments. The paper theoretically proves the appropriateness of the fusion of the two known methods. Furthermore, the authors also discussed the limitations of their approach, which is good for understanding the applicability of their method in real-world tasks. This work is complete. The paper is well written in several sections. However, the axioms and theorems in Section 3 seem too dense. I suggest the authors draw a figure to show the relationships between each axiom and each theorem. The authors have provided many experimental details in the Appendix to reproduce the results in the main paper. The results shown in this paper seem important. Researchers or practitioners may like to use the ideas from this paper in their own work. According to the experimental results, this paper advances the state of the art in a demonstrable way. The proposed method is tested on several existing datasets. There are no unique data, theoretical, or experimental approaches. My major comments are as follows. In Table 1, why do the baseline methods not have values in [Epist w/o Net]? What is the meaning of '_'? It is hard to see the efficiency of the proposed method compared to the baseline methods. My minor comments are as follows. In line 359, Tab. 3 should be Figure 3. In line 375, Tab. 10 should be Table 2.